That's Billion with a B: Scaling to the next level at WhatsApp

Rick Reed, WhatsApp
Erlang Factory SF, March 7, 2014


About Me
Joined WhatsApp in 2011
Learned Erlang at WhatsApp
Scalability & multimedia

Team
Small (~10 on Erlang)
Handle development and ops


Erlang
Awesome choice for WhatsApp
Scalability
Non-stop operations


Numbers
465M monthly users
19B messages in & 40B out per day
600M pics, 200M voice, 100M videos
147M concurrent connections
230K peak logins/sec
342K peak msgs in/sec, 712K out


Multimedia Holiday Cheer
146 Gb/s out (Christmas Eve)
360M videos downloaded (Christmas Eve)
2B pics downloaded (46K/s) (New Year's Eve)
1 pic downloaded 32M times (New Year's Eve)


System Overview
(diagram: phones connect to a bank of chat servers; offline storage, mms servers, and the Account / Profile / Push / Group services sit behind them)


Output scale
(chart: messages, notifications, & presence)

Throughput scale (4 of 16 partitions)

psh311 | ERL          msg---------------  dist------------------------  wan------------------------
time             nodes  qlen  qmax  nzq   recv    msgin   msgout  kbq   wanin  wanout  nodes  kbq
02/25 07:30:01   408    0     0     0     435661  221809  135867  0     0      0       0      0
02/25 08:00:00   408    0     0     0     446658  227349  140331  0     0      0       0      0
02/25 08:30:01   408    0     0     0     454529  231521  143486  0     0      0       0      0
02/25 09:00:01   408    0     0     0     455700  231930  143928  0     0      0       0      0
02/25 09:30:00   408    0     0     0     453770  231048  143467  0     0      0       0      0

                 mnes----------------------------    io-in  io-out  sched  gc---  mem----
time             tminq  tmoutq  tmin   tmout  nodes  kb/s   kb/s    %util  /sec   tot Mb
02/25 07:30:01   0      0       11371  11371  4      32511  39267   44.0   47860  166808
02/25 08:00:00   0      0       11418  11420  4      33502  39693   45.4   49255  166817
02/25 08:30:01   0      0       11473  11472  4      34171  40460   46.3   50212  166830
02/25 09:00:01   0      0       11469  11468  4      34306  40811   46.5   50374  166847
02/25 09:30:00   0      0       11257  11254  4      34159  40763   46.3   50208  166870

(2 of 16 partitions)

prs101 | ERL          msg----------  dist---------------------  mnes--------------------------  sched  mem----
time             nodes  qlen  qmax   recv    msgin   msgout     tminq  tmoutq  tmin   tmout     %util  tot Mb
02/24 10:00:00   400    0     0      357383  174975  104489     0      0       76961  76999     27.7   15297
02/24 10:30:00   400    0     0      352178  172389  102970     0      0       75913  75893     27.3   15352
02/24 11:00:01   400    0     0      347643  170111  101688     0      0       74894  74916     27.0   15227
02/24 11:30:01   400    0     0      341300  167085  99822      0      0       73467  73478     26.6   15170


Db scale (1 of 16 partitions)

Active Tables       Local Copy Type    Records          Bytes
mmd_obj2(128)       disc_copies          165,861,476     32,157,681,888
mmd_reclaim         disc_copies            5,898,714        861,434,424
mmd_ref3(128)       disc_copies          932,819,505    168,494,166,624
mmd_upload2(128)    disc_copies            1,874,045        262,430,920
mmd_xcode3(128)     disc_copies            7,786,188      2,430,697,040
schema              disc_copies                  514            568,664
Total                                  1,114,240,442    204,206,979,560


Hardware Platform
~550 servers + standby gear
~150 chat servers (~1M phones each)
~250 mms servers
2x 2690v2 Ivy Bridge 10-core (40 threads total)
64-512 GB RAM
SSD (except video)
Dual-link GigE x 2 (public & private)
> 11,000 cores

Software Platform
FreeBSD 9.2
Erlang R16B01 (+patches)


Improving scalability
Decouple
Parallelize
Decouple
Optimize/Patch
Decouple
Monitor/Measure
Decouple

Decouple
Attempt to isolate trouble/bottlenecks
Downstream services (esp. non-essential)
Neighboring partitions

Asynchronicity to minimize impact of latency on throughput


Decouple
Avoid mnesia txn coupling: async_dirty
Use calls only when returning data, else cast
Make calls w/ timeouts only: no monitors
Non-blocking casts (nosuspend) sometimes
Large distribution buffers
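
A minimal sketch of these conventions in plain Erlang (module name and message shapes are illustrative, not WhatsApp's code):

-module(decouple_sketch).
-export([call/2, notify/2, write_async/2]).

%% A call bounded by a timeout rather than a monitor: if the server is
%% slow or gone, the caller gives up after 5s instead of tying its fate
%% to the server's.
call(Server, Request) ->
    Ref = make_ref(),
    Server ! {call, self(), Ref, Request},
    receive
        {reply, Ref, Reply} -> {ok, Reply}
    after 5000 ->
        {error, timeout}
    end.

%% Non-blocking send: with 'nosuspend', a full distribution buffer to a
%% busy node drops the message instead of suspending the sender.
notify(Pid, Event) ->
    case erlang:send(Pid, {event, Event}, [nosuspend]) of
        ok        -> ok;
        nosuspend -> {error, busy}
    end.

%% mnesia write as async_dirty, avoiding transaction coupling across nodes.
write_async(Tab, Record) ->
    mnesia:async_dirty(fun() -> mnesia:write(Tab, Record, write) end).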


Parallelize
Work distribution: start with gen_server
Spread work to multiple workers: gen_factory
Spread dispatch to multiple procs: gen_industry
Worker select via key (for db) or FIFO (for i/o)
Partitioned services
Usu. 2-32 partitions
pg2 addressing
Primary/secondary (usu. in pairs)
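
A sketch of the two selection styles, assuming pg2 groups named {Service, Partition} (the naming and partition count are illustrative); the FIFO style for i/o workers is sketched after the "Queuer" slide below:

-module(dispatch_sketch).
-export([pick_by_key/2, pick_partition/2]).

%% Key-based selection (db-style): the same key always maps to the same
%% worker, so per-record state stays with one process.
pick_by_key(Workers, Key) when is_tuple(Workers) ->
    element(erlang:phash2(Key, tuple_size(Workers)) + 1, Workers).

%% Partitioned service addressed through pg2: hash the key to one of the
%% partitions (usu. 2-32) and pick a member of that group.
pick_partition(Service, Key) ->
    Partition = erlang:phash2(Key, 16),
    case pg2:get_closest_pid({Service, Partition}) of
        Pid when is_pid(Pid) -> {ok, Pid};
        {error, _} = Error   -> Error
    end.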

Parallelize mnesia
Mostly async_dirty
Isolate records to 1 node/1 process via hashing
Each frag read/written on only 1 node
Multiple mnesia_tm: parallel replication streams
Multiple mnesia dirs: parallel i/o during dumps
Multiple mnesia “islands” (usu. 2 nodes/isle)
Better schema ops completion
Better load-time coordination
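
A hedged sketch of fragment-aware dirty access (table and record names are placeholders): the mnesia_frag access module hashes the key to a fragment, so each record is read and written only where that fragment lives.

%% Dirty, fragment-aware reads and writes via the mnesia_frag access module.
read_record(Tab, Key) ->
    mnesia:activity(async_dirty, fun() -> mnesia:read(Tab, Key) end, [], mnesia_frag).

write_record(Tab, Record) ->
    mnesia:activity(async_dirty, fun() -> mnesia:write(Tab, Record, write) end, [], mnesia_frag).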

Decouple
Avoid head-of-line blocking
Separate read & write queues
Separate inter-node queues
Avoid blocking when single node has problem
Node-to-node message forwarding
mnesia async_dirty replication
“Queuer” FIFO worker dispatch
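
A toy version of such a "queuer" (names illustrative): idle workers check in, each job goes to the first idle worker in FIFO order, and jobs queue only when every worker is busy, so one slow worker cannot hold up the line.

-module(queuer_sketch).
-export([start_link/0, submit/2, checkin/2]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() -> gen_server:start_link(?MODULE, [], []).

submit(Q, Job)     -> gen_server:cast(Q, {job, Job}).       %% enqueue work
checkin(Q, Worker) -> gen_server:cast(Q, {idle, Worker}).   %% worker is free again

init([]) -> {ok, {queue:new(), queue:new()}}.               %% {IdleWorkers, PendingJobs}

handle_cast({job, Job}, {Idle, Jobs}) ->
    case queue:out(Idle) of
        {{value, W}, Idle2} -> W ! {work, Job}, {noreply, {Idle2, Jobs}};
        {empty, _}          -> {noreply, {Idle, queue:in(Job, Jobs)}}
    end;
handle_cast({idle, W}, {Idle, Jobs}) ->
    case queue:out(Jobs) of
        {{value, Job}, Jobs2} -> W ! {work, Job}, {noreply, {Idle, Jobs2}};
        {empty, _}            -> {noreply, {queue:in(W, Idle), Jobs}}
    end.

handle_call(_Req, _From, State) -> {reply, ok, State}.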


Optimize
Offline storage
I/O bottleneck writing to mailboxes
Most messages picked up very quickly
Add write-back cache with variable sync delay
Can absorb overloads via sync delay

pop/s  msgs/p  nonz%  cach%  xcac%  synca  maxa   rd/s  push/s  wr/s
12694  5.9     24.7   78.3   98.7   21     51182  41    17035   10564
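
A heavily simplified sketch of such a write-back cache (the real offline store is far more involved; sync_to_disk/2 stands in for the actual write path, and the delay value is configurable):

-module(writeback_sketch).
-export([start_link/1, append/3, flush/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

%% Appends are buffered in memory and synced after SyncDelay ms; messages
%% fetched before the delay expires never touch the disk, and raising the
%% delay lets the store absorb write overloads.
start_link(SyncDelay) -> gen_server:start_link(?MODULE, SyncDelay, []).

append(S, Mailbox, Msg) -> gen_server:cast(S, {append, Mailbox, Msg}).
flush(S)                -> gen_server:call(S, flush).

init(SyncDelay) -> {ok, #{delay => SyncDelay, buf => #{}, tref => undefined}}.

handle_cast({append, Mailbox, Msg}, #{buf := Buf} = St) ->
    Buf2 = maps:update_with(Mailbox, fun(Ms) -> [Msg | Ms] end, [Msg], Buf),
    {noreply, arm_timer(St#{buf := Buf2})}.

handle_call(flush, _From, St) -> {reply, ok, do_sync(St)}.

handle_info(sync, St) -> {noreply, do_sync(St#{tref := undefined})}.

arm_timer(#{tref := undefined, delay := D} = St) ->
    St#{tref := erlang:send_after(D, self(), sync)};
arm_timer(St) -> St.

do_sync(#{buf := Buf} = St) ->
    maps:fold(fun(Mailbox, Msgs, _) -> sync_to_disk(Mailbox, Msgs) end, ok, Buf),
    St#{buf := #{}}.

sync_to_disk(_Mailbox, _Msgs) -> ok.   %% placeholder for the real write path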


Optimize
Offline storage (recent improvements)
Fixed head-of-line blocking in async file i/o
(BEAM patch to enable round-robin async i/o)
More efficient handling of large mailboxes
Keep large mailboxes from polluting cache


Optimize
Overgrown SSL session cache
Slow connection setup
Lowered cache timeout
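
The talk doesn't say which TLS stack the cache belongs to, but if it were the Erlang ssl application, the cache timeout maps to the session_lifetime application variable; a hedged sys.config fragment (the 600-second value is purely illustrative):

%% sys.config fragment: expire cached SSL/TLS sessions after 10 minutes
[{ssl, [{session_lifetime, 600}]}].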


Optimize
Slow access to mnesia table with lots of frags
Account table has 512 frags
Sparse mapping over islands/partitions
After adding hosts, throughput went down!
Unusually slow record access
On a hunch, looked at ets:info(stats)
Hash chains >2K (target is 7). Oops.


Optimize
mnesia frags (cont.)
Small percentage of hash buckets being used
ets uses average chain length to trigger split

 #define MAX_HASH 0xEFFFFFFFUL
 #define INVALID_HASH 0xFFFFFFFFUL
+#define HASH_INITVAL 33554467UL

 /* optimised version of make_hash (normal case? atomic key) */
 #define MAKE_HASH(term) \
     ((is_atom(term) ? (atom_tab(atom_val(term))->slot.bucket.hvalue) : \
-      make_hash2(term)) % MAX_HASH)
+      make_hash2_init(term, HASH_INITVAL)) % MAX_HASH)


Patch
FreeBSD 9.2
No more patches
Config for large network & RAM


Patch
Our original BEAM/OTP config/patches
Allocator config (for best superpage fit)
Real-time OS scheduler priority
Optimized timeofday delivery
Increased bif timer hash width
Improved check_io allocation scalability
Optimized prim_inet / inet accepts
Larger dist receive buffer

Patch
Our original config/patches (cont.)
Add pg2 denormalized group member lists
Limit runq task stealing
Add send w/ prepend
Add port reuse for prim_file:write_file
Add gc throttling w/ large message queues


Patch
New patches (since EFSF 2012 talk)
Add multiple timer wheels
Workaround mnesia_tm selective receive
Add multiple mnesia_tm async_dirty senders
Add mark/set for prim_file commands
Load mnesia tables from nearby node


Patch
New patches (since EFSF 2012 talk) (cont.)
Add round-robin scheduling for async file i/o
Seed ets hash to break coincidence w/ phash2
Optimize ets main/name tables for scale
Don't queue mnesia dump if already dumping


Decouple
Meta-clustering
Limit size of any single cluster
Allow a cluster to span long distances
wandist: dist-like transport over gen_tcp
Mesh-connected functional groups of servers
Transparent routing layer just above pg2
Local pg2 members published to far-end
All messages are single-hop
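
A sketch of that routing layer's shape (the wandist API here is hypothetical; only pg2 is real): deliver locally when the group has a local member, otherwise forward once over the wide-area transport to the owning cluster.

-module(meta_route_sketch).
-export([send/3]).

%% Prefer a local pg2 member; otherwise forward the message in a single
%% hop to the cluster that owns the group.
send(Group, RemoteCluster, Msg) ->
    case pg2:get_local_members(Group) of
        [Pid | _] when is_pid(Pid) ->
            Pid ! Msg,
            ok;
        _ ->
            wandist:send(RemoteCluster, Group, Msg)   %% hypothetical wandist call
    end.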

Meta-clustering
(diagram: DC1 and DC2 main clusters and DC1 and DC2 mms clusters joined by global clusters)


Topology
(diagram: DC1 main cluster, Acct cluster, and the DC1 and DC2 mms clusters, spanning DC1 and DC2)


Routing
(diagram: a service client in Cluster 1 reaches the {last,1}..{last,4} partitions via pg2; wandist links each {last,N} to its counterpart in Cluster 2, alongside other cluster-local services)


Clearing the minefield
Generally able to detect/defuse scalability mines before they explode
Events which test the system
World events (esp. soccer)
Server failures (usu. RAM)
Network failures
Bad software pushes


Clearing the minefield
Not always successful: 2/22 outage
Began with back-end router glitch
Mass node disconnect/reconnect
Resulted in a novel unstable state
Unsuccessful in stabilizing cluster (esp. pg2)
Full stop & restart (first time in years)
Also uncovered an overly-coupled subsystem
Rolling out pg2 patch

Challenges
Db scaling, esp. MMS
Load time (~1M objects/sec)
Load failures (unrecoverable backlog)
Bottlenecked on disk write throughput (>700 MB/s)
Patched a selective-receive issue, but more to go
Real-time cluster status & control at scale
A bunch of csshX windows no longer enough
Power-of-2 partitioning

Questions?
rr@whatsapp.com
@td_rr
GitHub: reedr/otp


Monitor/Measure
Per-node system metrics gathering
1-second and 1-minute polling
Pushed to Graphite for plotting
Per-node alerting script
OS limits (CPU, mem, network, disk)
BEAM (running, msgq backlog, sleepy scheds)
App-level metrics
Pushed to Graphite
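
For illustration, a sample lands in Graphite's plaintext protocol with one line per metric (host, port, and metric path below are placeholders):

-module(graphite_push_sketch).
-export([send_metric/4]).

%% Graphite plaintext protocol: "<path> <value> <unix-timestamp>\n".
send_metric(Host, Port, Path, Value) ->
    Line = io_lib:format("~s ~p ~p~n", [Path, Value, erlang:system_time(second)]),
    {ok, Sock} = gen_tcp:connect(Host, Port, [binary, {packet, raw}]),
    ok = gen_tcp:send(Sock, Line),
    gen_tcp:close(Sock).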

Monitor/Measure
Capacity plan against system limits
CPU util%, Mem util%, Disk full%, Disk busy%
Watch for process message queue backlog
Generally strive to remove all back pressure
Bottlenecks show as backlog
Alert on backlog > threshold (usu. 500K)
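
One way such a backlog check might look inside a node (a sketch, not the actual alerting script):

-module(backlog_sketch).
-export([backlogged/1]).

%% Return {Pid, QueueLen} for every process whose message queue exceeds
%% the threshold (usu. 500K in the talk).
backlogged(Threshold) ->
    [{Pid, Len} || Pid <- erlang:processes(),
                   {message_queue_len, Len} <- [erlang:process_info(Pid, message_queue_len)],
                   Len > Threshold].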


Monitor/Measure
(chart slides: screenshots of metrics dashboards)

Input scaling
(chart: logins)
