multithreading - How many dispatch_queues is too many in GCD (Grand Central Dispatch)?


There is a wonderful article about a lightweight notification system built in Swift, by Mike Ash: (https://www.mikeash.com/pyblog/friday-qa-2015-01-23-lets-build-swift-notifications.html).

The basic idea is to create objects that can be "listened" to, i.e. that invoke a callback when their state changes. To make this thread-safe, each object created holds its own dispatch_queue, which is used to gate the critical sections:

dispatch_sync(self.myqueue) {
    // modify critical state in self
}

and it won't be under high contention. I was kind of struck by the fact that every single object created that can be listened to makes its own dispatch queue, purely for the purpose of locking a few lines of code.
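To make the pattern concrete, here is a minimal sketch of the idea, written against the modern Swift DispatchQueue API rather than the dispatch_queue_create/dispatch_sync C functions used above; the type and property names are illustrative, not the article's:

import Foundation

// Each instance owns a private serial queue that acts purely as a lock
// around its listener list and state.
class Observable<T> {
    private var state: T
    private var listeners: [(T) -> Void] = []
    private let queue = DispatchQueue(label: "com.example.observable")  // one queue per object

    init(_ initial: T) {
        state = initial
    }

    func addListener(_ listener: @escaping (T) -> Void) {
        queue.sync { listeners.append(listener) }
    }

    func update(_ newValue: T) {
        queue.sync {
            state = newValue
            // Callbacks fire while the queue is held; fine for a sketch,
            // though a real implementation might dispatch them elsewhere.
            for listener in listeners { listener(newValue) }
        }
    }
}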

One poster suggested that an OSSpinLock would be faster and cheaper; maybe so, and it would certainly use a lot less space.

If a program creates hundreds or thousands (or tens of thousands) of such objects, should I worry about creating that many dispatch queues? Most of them won't ever be listened to, but some might.

It makes sense that two objects should not block each other, i.e. that they have separate locks, and I wouldn't think twice about embedding, say, a pthread_mutex in each object, but an entire dispatch queue? Is that okay?

Well, the documentation on Grand Central Dispatch is vague about its inner workings and the exact costs of dispatch queues, but it does state that:

GCD provides and manages FIFO queues to which your application can submit tasks in the form of block objects. Blocks submitted to dispatch queues are executed on a pool of threads fully managed by the system.

So, it sounds like queues are little more than an interface for queueing blocks onto a thread pool, and therefore should have no/minimal impact on performance when idle.

The conceptual documentation states that:

you can create as many serial queues as you need

which sounds like there is a trivial cost to creating a serial dispatch queue and leaving it idle.

Furthermore, I decided to test this by creating 10,000 serial and concurrent dispatch queues in an app with OpenGL content, and I didn't find performance impacted in any way: the FPS remained the same, and the queues utilised about 4 MB of RAM (~400 bytes per queue).
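For reference, the test was along these lines (a rough sketch, not the exact test code; the labels and loop structure are mine):

import Foundation

// Create 10,000 dispatch queues and simply hold onto them while the rest
// of the app (the OpenGL content) keeps running.
var queues: [DispatchQueue] = []
queues.reserveCapacity(10_000)

for i in 0..<10_000 {
    // For the concurrent variant, add `attributes: .concurrent`.
    queues.append(DispatchQueue(label: "test.queue.\(i)"))
}

// The queues are never used; frame rate and memory footprint were then
// observed while they sat idle.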

In terms of using OSSpinLock instead of dispatch queues, Apple is very clear in its documentation on Migrating Away from Threads that GCD is more efficient than using standard locks (at least in contended cases):

Replacing your lock-based code with queues eliminates many of the penalties associated with locks and also simplifies your remaining code. Instead of using a lock to protect a shared resource, you can instead create a queue to serialize the tasks that access that resource. Queues do not impose the same penalties as locks. For example, queueing a task does not require trapping into the kernel to acquire a mutex.
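In Swift terms, the substitution the documentation describes looks roughly like this (a sketch; the Counter type is a made-up example, not Apple's):

import Foundation

// A shared resource guarded by a serial queue instead of a mutex.
final class Counter {
    private var value = 0
    private let queue = DispatchQueue(label: "com.example.counter")

    func increment() {
        // Enqueueing here does not require trapping into the kernel the way
        // acquiring a contended mutex can.
        queue.sync { value += 1 }
    }

    func read() -> Int {
        return queue.sync { value }
    }
}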

Although it's worth noting that, if you are concerned about memory, you can release a queue while you're not using it and re-create it later when it's needed again.
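A sketch of that idea, assuming the queue is only created and released from a single owning thread (otherwise the lazy creation itself would need guarding):

import Foundation

final class Listenable {
    // nil while nobody is listening; re-created on demand.
    private var queue: DispatchQueue?

    private func lockingQueue() -> DispatchQueue {
        if let existing = queue { return existing }
        let created = DispatchQueue(label: "com.example.listenable")
        queue = created
        return created
    }

    func performCriticalWork(_ work: () -> Void) {
        lockingQueue().sync(execute: work)
    }

    func releaseQueueWhileIdle() {
        queue = nil   // ARC releases the idle queue and its few hundred bytes
    }
}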


tl;dr

Dispatch queues are the way to go. You don't need to worry about creating lots of queues and not using them, and they're more efficient than locks.

Edit: it turns out a spinlock is faster in uncontended situations, so you'll want to use that instead!
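For completeness, the spinlock variant looks something like this (note that OSSpinLock was later deprecated in favour of os_unfair_lock on newer OS versions, so treat this as a sketch of the approach from that era):

import Foundation

final class SpinLocked {
    private var lock = OS_SPINLOCK_INIT
    private var value = 0

    func increment() {
        // Spins briefly in user space; very cheap when uncontended.
        OSSpinLockLock(&lock)
        value += 1
        OSSpinLockUnlock(&lock)
    }
}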

