Dynamic iSCSI at Scale: Remote Paging at Google
Nick Black
Linux Plumbers Conference, August 2015, Seattle


Goals of this presentation
❏ Discuss remote paging of binaries at scale, and its motivation
  ❏ Experimenting with paging binaries and their support data from remote, fast storage
  ❏ This requires a robust implementation of highly dynamic iSCSI
❏ Share our experience with iSCSI on Linux
  ❏ What's working well? What could be improved?
  ❏ How is our use case different from typical ones?
  ❏ In what ways have we needed to modify the kernel?
❏ Learn what we could do to improve our use of iSCSI / kernel implementation
  ❏ We'd like to become more involved in Linux's iSCSI and block device projects


Performance problems with local cheap disks
❏ Lowest throughput of the local memory hierarchy
❏ Highest latency of the local memory hierarchy
❏ Unpredictable behavior, especially under load
❏ Fetch + page-in times can dominate a task's runtime
❏ Slow power control transitions
❏ Slowest task in a highly parallelized pipeline can slow down the entire job


The cluster enlarges our memory hierarchy
❏ Thousands of machines, each with some number of:
  ❏ Multicore processors with multilevel SRAM/EDRAM caches
  ❏ DDR3/DDR4 DRAM DIMMs (possibly NUMA)
  ❏ Flash storage and/or magnetic storage (IOCH and/or PCIe)
  ❏ Gigabit Ethernet or 10GigE NICs (PCIe, possibly channel-bonded)
❏ Cluster (common power sources, flat intracluster network bandwidth)
  ❏ Tens of Gbps to each machine from a single-Tbps switch
  ❏ Single Tbps to each switch in tens-of-Tbps superblocks
  ❏ Tens of Tbps to each superblock in a Pbps cluster fabric
  ❏ Tens of thousands of machines in a cluster


Memory hierarchy of generic warehouse computers
❏ DRAM provides hundreds of Gbps, low hundreds of ns latency, fed by either...
  ❏ PCIe 3.0 x8: 63Gbps, µs latency
  ❏ + 10GigE NIC: 10Gbps, several µs latency (plus wildly variable remote serving latency)
❏ ...or...
  ❏ Local SATA3: 4.8Gbps, µs latency
  ❏ + local SSD: low Gbps, µs latency, or
  ❏ + local HDD: low hundreds of Mbps, tens of ms latency, terrible tail latency


Better performance through network paging pt 1
❏ The SATA3 bus provides 4.8Gbps of usable throughput, but...
  ❏ A low-cost drive might average ~800Mbps on realistic read patterns
  ❏ ...and average several tens of milliseconds of seek time for each chunk
❏ The network can provide 10Gbps of usable throughput
  ❏ PCIe bus and QPI can handle it
  ❏ Dozens of times more bandwidth than the SATA3 bus
  ❏ Latencies in microseconds
❏ Disk server can saturate the network
  ❏ Caching effects among machines leave common data in disk server DRAM
  ❏ Disk servers can be outfitted with expensive high-throughput store (PCIe SSD etc.)
❏ Write case can't take advantage of intermachine caching, but the network won't introduce delay compared to a local disk write (it can take advantage of quality remote store)
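A quick back-of-envelope comparison of the paths above (a minimal sketch: the throughput figures come from these slides, and the 4Gb transfer size is the per-task package load used later in "Locally-fetched package distribution at scale pt 1"):

```python
# Time to move one task's .5GB (4Gb) package set over each path discussed above.
# Throughput figures are taken from the slides; seek latency is ignored, which
# flatters the spinning-disk case (tens of ms per chunk would be added there).
TRANSFER_GBIT = 4.0

PATHS_GBPS = {
    "10GigE network (disk server serving from DRAM)": 10.0,
    "Local SATA3 bus, ideal":                          4.8,
    "Low-cost local HDD, realistic read patterns":     0.8,
}

for name, gbps in PATHS_GBPS.items():
    print(f"{name:48s} {TRANSFER_GBIT / gbps:5.1f} s")
```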


Better performance through network paging pt 2
❏ Take advantage of demand paging
  ❏ No longer sucking down the full binary + data set to disk
  ❏ Grab, on demand, only the pages we need from remote
  ❏ Fewer total bytes transferred
  ❏ No useless bytes going through local/remote page caches
❏ Take full advantage of improving technologies
  ❏ CPU, memory, and disk size are all getting better
  ❏ Spinning disk seek times and throughput seem to be at a wall
    ❏ Spinning disk performance / size ratio is getting steadily worse (efficient utilization of magnetic storage results in steadily worsening performance)


Binaries and support files: read-only iSCSI
❏ Packages built as ext4 images + metadata
  ❏ Kept in global distributed storage (POSIX interface, smart redundancy, etc.)
❏ Pushed on demand to disk servers implementing a custom iSCSI target
  ❏ Lowest-level distributed filesystem nodes: no redundancy at this level
  ❏ Distribution infrastructure maintains a ratio of reachable copies per task
  ❏ Pushes new target lists to the initiator to allow dynamic target instances
❏ Custom iSCSI initiator drives the modified Linux kernel iSCSI-over-TCP transport
  ❏ Sets up a dm-verity device atop a dm-multipath device (MPIO, not MC/S)
  ❏ Connects to multiple independent remote iSCSI targets
  ❏ Hands off connections to the kernel, one per iSCSI session
    ❏ Makes new connections on connection failure or if instructed


Load balancing through dm-multipath
❏ Round-robin: Fill up the IOP queue, then move to the next one
  ❏ We have purposely set target queue depths fairly low; this would result in rapid cycling
  ❏ Doesn't allow backing off from a single loaded target
❏ Queue length: Select the path with the shortest queue
  ❏ Bytes per IOP are dynamic, but prop delay is likely less than round-trip time
❏ Service time: Dynamic recalculation based on throughput
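As a toy illustration of why the selector choice matters with shallow target queues (a minimal sketch, not the kernel's dm-multipath path selectors; the path names, drain rates, and batch size are all assumptions):

```python
# Toy model of two path-selection policies on three targets, one of which
# ("target-a") is loaded and completes IOPs at a third of the normal rate.
import itertools

PATHS = ["target-a", "target-b", "target-c"]
_rr = itertools.count()

def round_robin(in_flight):
    # Cycle through the paths regardless of how busy each one is.
    return PATHS[next(_rr) % len(PATHS)]

def queue_length(in_flight):
    # Pick the path with the fewest outstanding IOPs.
    return min(PATHS, key=lambda p: in_flight[p])

def simulate(selector, ticks=200, batch=4):
    in_flight = {p: 0 for p in PATHS}
    issued = {p: 0 for p in PATHS}
    for _ in range(ticks):
        for p in PATHS:
            completions = 1 if p == "target-a" else 3   # target-a is slow
            in_flight[p] = max(0, in_flight[p] - completions)
        for _ in range(batch):
            path = selector(in_flight)
            in_flight[path] += 1
            issued[path] += 1
    return issued, in_flight

for sel in (round_robin, queue_length):
    issued, backlog = simulate(sel)
    print(f"{sel.__name__:12s} issued={issued} final backlog={backlog}")
```

Round-robin keeps handing the loaded target an equal share, so its backlog grows; the queue-length policy naturally backs off it.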


Locally-fetched package distribution at scale pt 1
❏ Alyssa P. Hacker changes her LISP experiment, perhaps a massive neural net to determine whether ants can be trained to sort tiny screws in space.
  ❏ Assume 20,000 tasks, immediately schedulable
  ❏ Each task instance needs 3 packages, totalling .5GB (4Gb)
  ❏ Expected CPU time of each task, assuming an ideal preloaded page cache, is 120s
❏ 20K tasks * 4Gb compulsory load == 80Tb mandatory distribution
❏ Assuming 10Gbps bandwidth, ideal page cache, and ideal disk...
  ❏ Serialized fetches: average task delayed by 4,000s, 97% of total task time, 33x slowdown
  ❏ Worst-case task (8,000s) paces the job: 2hr+ to job completion, 6,667% slowdown
  ❏ Fully parallel fetches: ideal exponential distribution (requiring compute-node p2p) requires ⌈lg2(20000)⌉ = 15 generations of .4s each, worst case 6s; the job requires 126s, a 5% slowdown
  ❏ We can approach .4s total by initiating the p2p send before complete reception, a ~0.3% slowdown
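These figures fall out of a quick back-of-envelope calculation (a sketch that only restates the slide's assumptions):

```python
import math

# Assumptions from this slide.
tasks = 20_000        # immediately schedulable task instances
package_bits = 4e9    # 4Gb (.5GB) of packages per task
cpu_seconds = 120     # ideal per-task CPU time with a preloaded page cache
link_bps = 10e9       # 10Gbps to each machine

# Serialized fetches from a single 10Gbps source.
total = tasks * package_bits / link_bps            # 8,000s to push 80Tb
avg_delay = total / 2                              # average task waits 4,000s
print(f"serialized: avg delay {avg_delay:.0f}s, "
      f"{avg_delay / (avg_delay + cpu_seconds):.0%} of task time, "
      f"{avg_delay / cpu_seconds:.0f}x slowdown")
print(f"serialized: worst-case delay {total:.0f}s, "
      f"{total / cpu_seconds:.0%} slowdown")

# Fully parallel, ideal exponential (p2p) distribution.
per_copy = package_bits / link_bps                 # 0.4s per generation
generations = math.ceil(math.log2(tasks))          # 15
worst = generations * per_copy                     # 6s
print(f"parallel: {generations} generations of {per_copy:.1f}s = {worst:.0f}s, "
      f"{worst / cpu_seconds:.0%} slowdown")
```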




Locally-fetched package distribution at scale pt 2
❏ Introduce a single oversubscribed compute node fetching to contended disk
  ❏ The process must evict 128MB of (possibly not yet written-through) data
  ❏ Another process acquires and releases 128MB, possibly requiring a load from disk
  ❏ The process pages back in some or all of its 128MB
❏ If each phase takes 2s, 6s is added to the task runtime
  ❏ Worst-case tasks are now 6s slower, a 5% slowdown
❏ In reality, many such delays accumulate for at least one task, all due to paging to/from disk
  ❏ A damaged sector might result in a 30s delay, a 25% slowdown
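For concreteness, using only the figures on this slide and the 120s ideal task time from pt 1:

```python
# Figures from this slide; the 120s ideal task time comes from pt 1.
cpu_seconds = 120
phases, phase_seconds = 3, 2          # evict, competitor churn, page back in

contended = phases * phase_seconds
print(f"contended local disk: +{contended}s "
      f"({contended / cpu_seconds:.0%} slowdown)")          # +6s, 5%

damaged_sector = 30
print(f"damaged sector:       +{damaged_sector}s "
      f"({damaged_sector / cpu_seconds:.0%} slowdown)")      # +30s, 25%
```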


Remotely-paged packages at scale
❏ No compute-node peer-to-peer (p2p retained at the target distribution level)
  ❏ Assume n compute nodes per disk node
  ❏ We can distribute in time approaching one copy with exponential p2p (.4s)
  ❏ n compute nodes then grab p pages of P total; worst case approaches .4s * (np/P + 1)
❏ Only demanded pages traverse page caches or networks
  ❏ Fewer compulsory delays offset the lack of last-level p2p
  ❏ Compulsory delays are spread more smoothly over the life of most tasks
  ❏ Task container can be allocated less memory
❏ Eliminate the annoyances of local spinning disk
  ❏ Tail latencies are much better controlled -- very few slow / contended reads
  ❏ Redundancy -- dm-multipath allows us to fail over quickly
  ❏ Permit radical new physical setups
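To make the worst-case bound concrete (only the .4s per-copy time and the formula come from the slide; n, p, and P below are assumed, illustrative values):

```python
# Worst-case remote-paging distribution time, .4s * (n*p/P + 1).
# The formula and the 0.4s per-copy time are from the slide; n, p, and P
# are assumed values chosen only to illustrate the shape of the bound.
copy_s = 0.4          # time to push one full package copy to a disk-server target
n = 100               # compute nodes served per disk node (assumed)
p = 32_000            # 4KB pages a task actually demands (~128MB, assumed)
P = 128_000           # 4KB pages in the full .5GB package set

worst_case = copy_s * (n * p / P + 1)
print(f"worst case ~ {worst_case:.1f}s")    # ~10.4s with these assumptions
```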


Coping with an unreliable userspace iscsid
❏ Kernel expects a userspace iSCSI control daemon to always be around
  ❏ Alas, this expectation cannot always be met (OOMs, crashes, load, etc.)
  ❏ Restart/schedulability might take time; races result in lost kevents
❏ Becomes particularly problematic in the face of connection errors
  ❏ We want immediate failover to a standby session via dm-multipath
  ❏ iSCSI wants to do connection recovery via an external agent
  ❏ No one seems to know whether kernel MC/S works (the Open-iSCSI initiator doesn't use it)
❏ We disable the connection recovery timeout, immediately hitting the error path
  ❏ Session dies, error bubbles up to dm-multipath, immediate failover
  ❏ Userspace initiator gets to it eventually and creates a new session for the multipath device
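On a stock Open-iSCSI setup, the closest approximation to this "fail fast and let dm-multipath handle it" behavior is shrinking the session recovery timeout rather than patching it out entirely; a minimal sketch, assuming the kernel exposes the session's recovery_tmo as a writable sysfs attribute (the persistent Open-iSCSI knob is node.session.timeo.replacement_timeout in iscsid.conf):

```python
# Sketch: shorten the iSCSI session recovery timeout so command errors surface
# to dm-multipath within a few seconds instead of waiting out the long default
# for userspace-driven connection recovery. This only approximates, on an
# unmodified kernel, the "disable the recovery timeout" change described above;
# it assumes recovery_tmo is writable on your kernel.
import glob

for attr in glob.glob("/sys/class/iscsi_session/session*/recovery_tmo"):
    try:
        with open(attr, "w") as f:
            f.write("5\n")          # seconds before failed I/O is reported upward
    except OSError as err:
        print(f"could not set {attr}: {err}")
```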


User-initiated stop can race with kernel
❏ We still want to deliver the connection stop message, but we don't want to delay connection teardown waiting for userspace.
❏ Can't just disable userspace-initiated connection stop, as it's necessary for changing up targets and standard client-side termination.
❏ Added locking to iscsi_sw_tcp_release_conn
❏ Messy interaction between sk->sk_callback_lock and tcp_sw_conn->lock
❏ Upstream indicated a lack of interest in this solution, but it seems difficult to do reliable, fast fail recovery with MPIO without it, and upstream doesn't want MC/S on the initiator side


Why no MC/S (Multiple Connections per Session)?
❏ LIO in-kernel target does support MC/S
❏ Competitor initiators + targets support MC/S
❏ There's at least some support in the kernel dataplane initiator
  ❏ What is the state of this code?
  ❏ Userspace initiator doesn't use it
❏ MC/S only supports one target within the session
  ❏ No good for multitarget load balancing
❏ Mailing list has pushed for MPIO (dm-multipath) to be used exclusively
  ❏ Requires reliable termination of sessions with failed connections (previous slides)
  ❏ Ignorance of command numbering complicates load balancing
  ❏ Difficult to rapidly recover from temporarily-unavailable targets


Winning: lower job start times


Winning: faster tasks
❏ These graphs reflect a 3.11.10-based kernel
❏ Missing scsi-mq and other 2014/2015 improvements
❏ Nowhere near the theoretical ideal, but already a big win
❏ A 4.x rebase ought to improve things for free

