Unfortunately, for all the benefits provided by dispatch queues, they’re not a panacea for all performance issues. There are three well-known problems that you can run into when implementing concurrency in your app if you’re not careful:
Race conditions
Deadlock
Priority inversion
Race Conditions
Threads that share the same process, including your app itself, share the same address space. This means every thread can read from and write to the same shared resources. If you aren't careful, you can run into race conditions, in which multiple threads try to write to the same variable at the same time.
Consider the example where you have two threads executing, and they’re both trying to update your object’s count variable. Reads and writes are separate tasks that the computer cannot execute as a single operation. Computers work on clock cycles in which each tick of the clock allows a single operation to execute.
Note: Do not confuse a computer’s clock cycle with the clock on your watch. An iPhone 14 has a 3.23 GHz processor, meaning it can perform 3,230,000,000 clock cycles per second!
Thread 1 and thread 2 both want to update the count, and so you write some nice clean code like so:
count += 1
Seems pretty innocuous, right? Break that statement down into its component parts, add a bit of hand-waving, and what you end up with is something like this:
Load value of variable count into memory.
Increment value of count by one in memory.
Write the newly updated count back to memory.
The graphic shows:
Thread 1 kicked off a clock cycle before thread 2 and read the value 1 from count.
On the second clock cycle, thread 1 updates the in-memory value to 2 and thread 2 reads the value 1 from count.
On the third clock cycle, thread 1 now writes the value 2 back to the count variable. However, thread 2 is just now updating the in-memory value from 1 to 2.
On the fourth clock cycle, thread 2 now also writes the value 2 to count… except you expected to see the value 3 because two separate threads both updated the value.
This type of race condition leads to incredibly complicated debugging due to the non-deterministic nature of these scenarios.
If thread 1 had started just two clock cycles earlier you’d have the value 3 as expected, but don’t forget how many of these clock cycles happen per second.
You might run the program 20 times and get the correct result, then deploy it and start getting bug reports.
You can usually solve race conditions with a serial queue, as long as you know they are happening. If your program has a variable that needs to be accessed concurrently, you can wrap the reads and writes with a private queue, like this:
class ThreadableInt {
  private let queue = DispatchQueue(label: "...")
  private var _value = 0

  var value: Int {
    queue.sync { _value }
  }

  static func += (left: ThreadableInt, right: Int) {
    left.increment(amount: right)
  }

  static func -= (left: ThreadableInt, right: Int) {
    left.decrement(amount: right)
  }

  @discardableResult
  func increment(amount: Int = 1) -> Int {
    queue.sync {
      _value += amount
      return _value
    }
  }

  @discardableResult
  func decrement(amount: Int = 1) -> Int {
    queue.sync {
      _value -= amount
      return _value
    }
  }
}
Notice how each access of _value is wrapped with a dispatch to the queue, preventing race conditions. You can verify both the issue and the solution by pasting the following into a playground:
let threadableInt = ThreadableInt()
DispatchQueue.concurrentPerform(iterations: 10_000) { _ in
  threadableInt += 1
}
print(threadableInt.value)

var dangerous = 0
DispatchQueue.concurrentPerform(iterations: 10_000) { _ in
  dangerous += 1
}
print(dangerous)
The playground will increment a variable 10,000 times concurrently, first using your new class, then using a simple integer. The threadableInt will always print the expected value 10,000, whereas the dangerous variable will normally print a value in the high 9k range.
Because you’ve not stated otherwise, the queue is a serial queue.
While you’ve created a usable solution, there are still many factors you need to keep in mind. The way you use the value is important. Consider this simple code:
if threadableInt.value > 10 && threadableInt.value < 20 {
You've just introduced another possible failure. While each lookup is thread safe, you've used the value twice, meaning another thread might change the value between the two comparisons. The solution will depend on your specific needs. Does it make sense to assign a temporary variable and compare against that? Do you need to add a method to ThreadableInt that allows you to pass a block of code, protecting the entire check via the DispatchQueue?
Just because you’ve wrapped your value, that doesn’t solve every issue.
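One option, sketched below, is the closure-passing idea just mentioned. The `AtomicInt` type and its `perform(_:)` method are illustrative, not part of the chapter's `ThreadableInt`: `perform(_:)` runs an arbitrary block on the private queue, so a compound check sees one consistent snapshot of the value.

```swift
import Dispatch

// A sketch of passing a block of code that's protected by the queue.
// AtomicInt and perform(_:) are hypothetical names for illustration.
final class AtomicInt {
  private let queue = DispatchQueue(label: "com.example.atomic")
  private var _value = 0

  var value: Int {
    queue.sync { _value }
  }

  func increment(amount: Int = 1) {
    queue.sync { _value += amount }
  }

  /// Runs `body` on the private queue, so no other thread can change
  /// the value in between multiple comparisons inside the block.
  func perform<T>(_ body: (Int) -> T) -> T {
    queue.sync { body(_value) }
  }
}

let counter = AtomicInt()
counter.increment(amount: 15)

// Both comparisons observe the same snapshot of the value.
let inRange = counter.perform { $0 > 10 && $0 < 20 }
print(inRange) // true
```

Because the whole check runs as a single task on the serial queue, no interleaved write can change the value between the two comparisons.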
Note: You can implement the same private queue sync for lazy variables, which might be accessed from multiple threads. If you don't, you could end up with the lazy variable's initializer being run twice. Much like the variable assignment from before, two threads could attempt to access the same lazy variable at nearly identical times: the second thread sees the variable as not yet initialized, even though the first thread's access is already in the middle of creating it. A classic race condition.
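The lazy variable case from the note can be sketched like this; `DataStore`, its queue label, and the property are illustrative, not from the chapter's sample code. All access to the lazy property funnels through a private serial queue, so the initializer can only ever run once:

```swift
import Dispatch

// A sketch of protecting a lazy variable with a private serial queue.
// DataStore and expensiveList are hypothetical names for illustration.
final class DataStore {
  private let queue = DispatchQueue(label: "com.example.datastore")

  private lazy var expensiveList: [Int] = {
    // One-time setup that must not run twice.
    Array(0..<5)
  }()

  // Every read goes through the serial queue, so the first access —
  // which triggers the initializer — can't race with another access.
  var list: [Int] {
    queue.sync { expensiveList }
  }
}

let store = DataStore()
DispatchQueue.concurrentPerform(iterations: 100) { _ in
  _ = store.list
}
print(store.list) // [0, 1, 2, 3, 4]
```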
Dispatch Barrier
Sometimes, your shared resource requires more complex logic than a simple variable modification. You’ll frequently see questions related to this online, and often they come with solutions related to locks and semaphores. Locking is very hard to implement properly. Instead, you can use Apple’s dispatch barrier solution from GCD.
If you create a concurrent queue, you can process as many read tasks as you want, as they can all run at the same time. When the variable needs to be written to, you need to lock down the queue so that everything already submitted completes, but no new submissions run until the update completes.
Usually you'll use a thread barrier when manipulating objects in an array. For example, consider a list of images, in an array of URLs. Looping through the array shouldn't result in blocking during the iteration.
You implement a dispatch barrier approach like so:
When using dispatch barriers you need to request a concurrent queue.
You'll need a method to return a copy of the array.
Adding an item is a modification, so you'll need to add the .barrier flag.
Removing is also a modification, so provide the .barrier flag here as well.
Notice how you've specified that you want a concurrent queue and that the writes should be implemented with a barrier. The barrier task won't run until all of the previous tasks have completed.
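The barrier approach described above can be sketched as follows; the `ThreadSafeArray` type and its queue label are illustrative, not from the chapter's sample project. Reads run concurrently, while each write is submitted with the `.barrier` flag so it waits for in-flight reads and blocks new work until the mutation finishes:

```swift
import Dispatch

// A sketch of a dispatch barrier protecting an array.
// ThreadSafeArray is a hypothetical name for illustration.
final class ThreadSafeArray<Element> {
  private let queue = DispatchQueue(label: "com.example.array",
                                    attributes: .concurrent)
  private var storage: [Element] = []

  // Reads can overlap with each other on the concurrent queue.
  var snapshot: [Element] {
    queue.sync { storage }
  }

  // A barrier block runs alone: everything submitted earlier finishes
  // first, and nothing submitted later starts until it completes.
  func append(_ element: Element) {
    queue.async(flags: .barrier) { self.storage.append(element) }
  }

  func remove(at index: Int) {
    queue.async(flags: .barrier) {
      guard self.storage.indices.contains(index) else { return }
      self.storage.remove(at: index)
    }
  }
}

let numbers = ThreadSafeArray<Int>()
DispatchQueue.concurrentPerform(iterations: 1_000) { i in
  numbers.append(i)
}
print(numbers.snapshot.count) // 1000
```

Because the queue is FIFO, the final `snapshot` read is enqueued after all of the barrier writes and therefore sees every append.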
Deadlock
Imagine you're driving down a two-lane road on a bright sunny day and you arrive at your destination. Your destination is on the other side of the road, so you turn on the car's turn signal. You wait as tons of traffic drives in the other direction.
While waiting, more cars line up behind you. You then notice a car behind you honk and pull around; it swings out and stops a few feet from you, but it is now blocked by your bumper.
Unfortunately, there are more cars behind the oncoming car, and so the other lane backs up as well, blocking your ability to pull into your destination and clear the lane.
You've now reached deadlock, as you are both waiting on another task that can never complete. Neither of you can move, as both are blocking the entrance to your destinations.
Deadlock is a pretty rare occurrence in Swift programming, unless you are using semaphores or other explicit locking mechanisms. Accidentally calling sync against the current dispatch queue is the most common occurrence of this that you'll run into.
If you're using semaphores to control access to multiple resources, be sure that you ask for the resources in the same order. If thread 1 requests a hammer and then a saw, whereas thread 2 requests a saw and then a hammer, you can deadlock. Thread 1 requests and receives a hammer at the same time thread 2 requests and receives a saw. Then thread 1 asks for a saw, without releasing the hammer, but thread 2 owns the resource, so thread 1 must wait. Thread 2 asks for a hammer, but thread 1 still owns the resource, so thread 2 must wait for the hammer to become available. Both threads are now in deadlock, as neither can progress until their requested resources are freed, which will never happen.
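The consistent-ordering rule for multiple resources can be sketched with two semaphores; the worker names are illustrative. Both workers acquire the "hammer" first and the "saw" second, which is exactly what prevents each from holding one tool while waiting forever for the other:

```swift
import Dispatch

// A sketch of consistent lock ordering: every thread acquires the
// resources in the same hammer-then-saw order.
let hammer = DispatchSemaphore(value: 1)
let saw = DispatchSemaphore(value: 1)
let group = DispatchGroup()

func buildSomething(worker: String) {
  hammer.wait()  // Always acquire the hammer first...
  saw.wait()     // ...and the saw second, on every thread.
  print("\(worker) is using both tools")
  saw.signal()   // Release in reverse order.
  hammer.signal()
}

for worker in ["Thread 1", "Thread 2"] {
  DispatchQueue.global().async(group: group) {
    buildSomething(worker: worker)
  }
}

// With a consistent ordering this always completes; if one worker took
// the saw first instead, this wait could hang forever.
group.wait()
print("Both workers finished")
```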
Priority Inversion
Technically speaking, priority inversion occurs when a queue with a lower quality of service, or QoS, is given higher system priority than a queue with a higher QoS. If you've been playing around with submitting tasks to queues, you've probably noticed an overload of async that takes a qos parameter.
Back in the "Queues & Threads" chapter, it was mentioned that the QoS of a queue is able to change, based on the work submitted to it. Normally, when you submit work to a queue, it runs at the priority of the queue itself. If you find the need to, however, you can specify that a specific task should have a higher or lower priority than normal.
If you're using a .userInitiated queue and a .utility queue, and you submit multiple tasks to the latter queue with a .userInteractive quality of service (namely, a higher priority than .userInitiated), you could end up in a situation in which the latter queue is assigned a higher priority by the operating system. Suddenly, all the tasks in the queue, most of which are really of the .utility quality of service, wind up running before the tasks from the .userInitiated queue. This is simple to avoid: If you need a higher quality of service, use a different queue!
The more common situation where priority inversion occurs is when a higher quality of service queue shares a resource with a lower quality of service queue. When the lower queue gets a lock on the object, the higher queue now has to wait. Until the lock is released, the high-priority queue is effectively stuck doing nothing while low-priority tasks run.
To see priority inversion in practice, open up the playground named PriorityInversion.playground from the starter project folder in this chapter's project materials.
In the code, you'll see three queues with different QoS values, as well as a semaphore:
let high = DispatchQueue.global(qos: .userInteractive)
let medium = DispatchQueue.global(qos: .userInitiated)
let low = DispatchQueue.global(qos: .background)
let semaphore = DispatchSemaphore(value: 1)
Then, various tasks are started on the queues:
high.async {
  // Wait 2 seconds just to be sure all the other tasks have enqueued
  Thread.sleep(forTimeInterval: 2)
  semaphore.wait()
  defer { semaphore.signal() }
  print("High priority task is now running")
}

for i in 1...10 {
  medium.async {
    let waitTime = Double(exactly: arc4random_uniform(7))!
    print("Running medium task \(i)")
    Thread.sleep(forTimeInterval: waitTime)
  }
}

low.async {
  semaphore.wait()
  defer { semaphore.signal() }
  print("Running long, lowest priority task")
  Thread.sleep(forTimeInterval: 5)
}
If you display the console (Shift-Command-Y) and then run the playground, you'll see a different order each time you run:
Running medium task 7
Running medium task 6
Running medium task 1
Running medium task 4
Running medium task 2
Running medium task 8
Running medium task 5
Running medium task 3
Running medium task 9
Running medium task 10
Running long, lowest priority task
High priority task is now running
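One mitigation is sketched below; this is not from the playground itself, and it assumes you control every task that touches the semaphore. If all of the semaphore's users run at the same QoS, a low-priority task can never hold the semaphore hostage while higher-priority work waits behind it:

```swift
import Dispatch

// A sketch of avoiding priority inversion by giving every task that
// shares the semaphore the same QoS. Task names are illustrative.
let queue = DispatchQueue.global(qos: .userInitiated)
let semaphore = DispatchSemaphore(value: 1)
let group = DispatchGroup()

for name in ["first", "second", "third"] {
  queue.async(group: group) {
    semaphore.wait()             // Every contender has the same QoS,
    defer { semaphore.signal() } // so no inversion is possible here.
    print("Running \(name) task")
  }
}

group.wait()
print("All tasks finished")
```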
Throughout this chapter, you explored some common ways in which concurrent code can go wrong. While deadlock and priority inversion are much less common on iOS than other platforms, race conditions are definitely a concern you should be ready for.
If you're still eager to learn more about these common concurrency issues, the internet is full of interesting and descriptive examples. In the "Thread Sanitizer" chapter, you'll hear more about how to track down race conditions and threading issues.