libusb 1.0.27
A cross-platform user library to access USB devices
Multi-threaded applications and asynchronous I/O

libusb is a thread-safe library, but extra considerations must be applied to applications which interact with libusb from multiple threads.

The underlying issue that must be addressed is that all libusb I/O revolves around monitoring file descriptors through the poll()/select() system calls. This is directly exposed at the asynchronous interface but it is important to note that the synchronous interface is implemented on top of the asynchronous interface, therefore the same considerations apply.

The issue is that if two or more threads are concurrently calling poll() or select() on libusb's file descriptors then only one of those threads will be woken up when an event arrives. The others will be completely oblivious that anything has happened.

Consider the following pseudo-code, which submits an asynchronous transfer then waits for its completion. This style is one way you could implement a synchronous interface on top of the asynchronous interface (and libusb does something similar, albeit more advanced due to the complications explained on this page).

void cb(struct libusb_transfer *transfer)
{
    // transfer callback: flag completion for the waiting thread
    int *completed = transfer->user_data;
    *completed = 1;
}

void myfunc() {
    struct libusb_transfer *transfer;
    unsigned char buffer[LIBUSB_CONTROL_SETUP_SIZE] __attribute__ ((aligned (2)));
    int completed = 0;

    transfer = libusb_alloc_transfer(0);
    libusb_fill_control_setup(buffer,
        LIBUSB_REQUEST_TYPE_VENDOR | LIBUSB_ENDPOINT_OUT, 0x04, 0x01, 0, 0);
    libusb_fill_control_transfer(transfer, dev, buffer, cb, &completed, 1000);
    libusb_submit_transfer(transfer);

    while (!completed) {
        poll(libusb file descriptors, 120*1000);
        if (poll indicates activity)
            libusb_handle_events_timeout(ctx, &zero_tv);
    }
    printf("completed!");
    // other code here
}

Here we are serializing completion of an asynchronous event against a condition - the condition being completion of a specific transfer. The poll() loop has a long timeout to minimize CPU usage during situations when nothing is happening (it could reasonably be unlimited).

If this is the only thread that is polling libusb's file descriptors, there is no problem: there is no danger that another thread will swallow up the event that we are interested in. On the other hand, if there is another thread polling the same descriptors, there is a chance that it will receive the event that we were interested in. In this situation, myfunc() will only realise that the transfer has completed on the next iteration of the loop, up to 120 seconds later. Clearly a two-minute delay is undesirable, and don't even think about using short timeouts to circumvent this issue!

The solution here is to ensure that no two threads are ever polling the file descriptors at the same time. A naive implementation of this would impact the capabilities of the library, so libusb offers the scheme documented below to ensure no loss of functionality.

Before we go any further, it is worth mentioning that all libusb-wrapped event handling procedures fully adhere to the scheme documented below. This includes libusb_handle_events() and its variants, and all the synchronous I/O functions - libusb hides this headache from you.

libusb_handle_events() from multiple threads

Even when only using libusb_handle_events() and synchronous I/O functions, you can still have a race condition. You might be tempted to solve the above with libusb_handle_events() like so:

while (!completed) {
    libusb_handle_events(ctx);
}
printf("completed!");

This however has a race between the checking of completed and libusb_handle_events() acquiring the events lock, so another thread could have completed the transfer, resulting in this thread hanging until either a timeout or another event occurs. See also commit 6696512aade99bb15d6792af90ae329af270eba6 which fixes this in the synchronous API implementation of libusb.

Fixing this race requires checking the variable completed only after taking the event lock, which defeats the concept of just calling libusb_handle_events() without worrying about locking. This is why libusb-1.0.9 introduced the new libusb_handle_events_timeout_completed() and libusb_handle_events_completed() functions, which perform the completion check for you after they have acquired the lock:

while (!completed) {
    libusb_handle_events_completed(ctx, &completed);
}
printf("completed!");

This nicely fixes the race in our example. Note that if all you want to do is submit a single transfer and wait for its completion, then using one of the synchronous I/O functions is much easier.

Note
The completed variable must be modified while holding the event lock, otherwise a race condition can still exist. It is simplest to do so from within the transfer callback as shown above.
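
For illustration, here is a minimal sketch of that synchronous approach, assuming the same hypothetical vendor request and the open device handle dev used in the asynchronous example above (error handling kept minimal):

// requires <stdio.h> and <libusb.h>; dev is an open libusb_device_handle
int r = libusb_control_transfer(dev,
    LIBUSB_REQUEST_TYPE_VENDOR | LIBUSB_ENDPOINT_OUT,  // bmRequestType
    0x04, 0x01, 0,                                     // bRequest, wValue, wIndex (hypothetical values)
    NULL, 0,                                           // no data stage
    1000);                                             // timeout in milliseconds
if (r < 0)
    fprintf(stderr, "transfer failed: %s\n", libusb_error_name(r));
else
    printf("completed!\n");

libusb_control_transfer() performs the event handling internally, so none of the locking discussed on this page is needed here.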

The events lock

The problem arises when we consider that libusb exposes its file descriptors so that you can integrate asynchronous USB I/O into existing main loops, effectively allowing you to do some work behind libusb's back. If you do take libusb's file descriptors and pass them to poll()/select() yourself, you need to be aware of the associated issues.

The first concept to be introduced is the events lock. The events lock is used to serialize threads that want to handle events, such that only one thread is handling events at any one time.

You must take the events lock before polling libusb file descriptors, using libusb_lock_events(). You must release the lock as soon as you have aborted your poll()/select() loop, using libusb_unlock_events().
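
As an aside, the pseudo-code on this page just says "libusb file descriptors" without showing where they come from. The fragment below is a rough sketch of how you might build a pollfd array from libusb_get_pollfds(), on platforms where libusb exposes poll-able descriptors; a complete integration would also need to honour libusb_get_next_timeout(), and the fixed-size array is purely for illustration. libusb_handle_events_locked(), used once activity is detected, is described further down this page.

// requires <poll.h> and <libusb.h>; ctx is an initialized libusb_context
const struct libusb_pollfd **usb_fds;
struct pollfd fds[64];
nfds_t nfds = 0;

libusb_lock_events(ctx);                         // no other thread may handle events now

usb_fds = libusb_get_pollfds(ctx);               // NULL-terminated list of descriptors
for (int i = 0; usb_fds && usb_fds[i] && nfds < 64; i++) {
    fds[nfds].fd = usb_fds[i]->fd;
    fds[nfds].events = usb_fds[i]->events;
    nfds++;
}
libusb_free_pollfds(usb_fds);

if (poll(fds, nfds, 120*1000) > 0) {
    struct timeval zero_tv = { 0, 0 };
    libusb_handle_events_locked(ctx, &zero_tv);  // we already hold the events lock
}

libusb_unlock_events(ctx);                       // release as soon as the polling is done

The sections below explain why a thread must not sit in such a loop indefinitely while waiting on one specific transfer.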

Letting other threads do the work for you

Although the events lock is a critical part of the solution, it is not enough on its own. You might wonder if the following is sufficient...

libusb_lock_events(ctx);
while (!completed) {
    poll(libusb file descriptors, 120*1000);
    if (poll indicates activity)
        libusb_handle_events_timeout(ctx, &zero_tv);
}
libusb_unlock_events(ctx);

...and the answer is that it is not. This is because the transfer in the code shown above may take a long time (say 30 seconds) to complete, and the lock is not released until the transfer is completed.

Another thread with similar code that wants to do event handling may be working with a transfer that completes after a few milliseconds. Despite having such a quick completion time, the other thread cannot check the status of its transfer until the code above has finished (30 seconds later) due to contention on the lock.

To solve this, libusb offers you a mechanism to determine when another thread is handling events. It also offers a mechanism to block your thread until the event handling thread has completed an event (and this mechanism does not involve polling of file descriptors).

After determining that another thread is currently handling events, you obtain the event waiters lock using libusb_lock_event_waiters(). You then re-check that some other thread is still handling events, and if so, you call libusb_wait_for_event().

libusb_wait_for_event() puts your application to sleep until an event occurs, or until a thread releases the events lock. When either of these things happen, your thread is woken up, and should re-check the condition it was waiting on. It should also re-check that another thread is handling events, and if not, it should start handling events itself.

This looks like the following, as pseudo-code:

retry:
if (libusb_try_lock_events(ctx) == 0) {
    // we obtained the event lock: do our own event handling
    while (!completed) {
        if (!libusb_event_handling_ok(ctx)) {
            libusb_unlock_events(ctx);
            goto retry;
        }
        poll(libusb file descriptors, 120*1000);
        if (poll indicates activity)
            libusb_handle_events_locked(ctx, &zero_tv);
    }
    libusb_unlock_events(ctx);
} else {
    // another thread is doing event handling. wait for it to signal us that
    // an event has completed
    libusb_lock_event_waiters(ctx);

    while (!completed) {
        // now that we have the event waiters lock, double check that another
        // thread is still handling events for us. (it may have ceased handling
        // events in the time it took us to reach this point)
        if (!libusb_event_handler_active(ctx)) {
            // whoever was handling events is no longer doing so, try again
            libusb_unlock_event_waiters(ctx);
            goto retry;
        }

        libusb_wait_for_event(ctx, NULL);
    }
    libusb_unlock_event_waiters(ctx);
}
printf("completed!\n");

A naive look at the above code may suggest that this can only support one event waiter (hence a total of 2 competing threads, the other doing event handling), because the event waiter seems to have taken the event waiters lock while waiting for an event. However, the system does support multiple event waiters, because libusb_wait_for_event() actually drops the lock while waiting, and reacquires it before continuing.

We have now implemented code which can dynamically handle situations where nobody is handling events (so we should do it ourselves), and it can also handle situations where another thread is doing event handling (so we can piggyback onto them). It is also equipped to handle a combination of the two, for example, another thread is doing event handling, but for whatever reason it stops doing so before our condition is met, so we take over the event handling.

Four functions were introduced in the above pseudo-code. Their importance should be apparent from the code shown above.

  1. libusb_try_lock_events() is a non-blocking function which attempts to acquire the events lock but returns a failure code if it is contended.
  2. libusb_event_handling_ok() checks that libusb is still happy for your thread to be performing event handling. Sometimes, libusb needs to interrupt the event handler, and this is how you can check if you have been interrupted. If this function returns 0, the correct behaviour is for you to give up the event handling lock, and then to repeat the cycle. The following libusb_try_lock_events() will fail, so you will become an events waiter. For more information on this, read The full story below.
  3. libusb_handle_events_locked() is a variant of libusb_handle_events_timeout() that you can call while holding the events lock. libusb_handle_events_timeout() itself implements similar logic to the above, so be sure not to call it when you are "working behind libusb's back", as is the case here.
  4. libusb_event_handler_active() determines if someone is currently holding the events lock (and hence is handling events).

You might be wondering why there is no function to wake up all threads blocked on libusb_wait_for_event(). This is because libusb can do this internally: it will wake up all such threads when someone calls libusb_unlock_events() or when a transfer completes (at the point after its callback has returned).

The full story

The above explanation should be enough to get you going, but if you're really thinking through the issues then you may be left with some more questions regarding libusb's internals. If you're curious, read on, and if not, skip to the next section to avoid confusing yourself!

The immediate question that may spring to mind is: what if one thread modifies the set of file descriptors that need to be polled while another thread is doing event handling?

There are 2 situations in which this may happen.

  1. libusb_open() will add another file descriptor to the poll set, therefore it is desirable to interrupt the event handler so that it restarts, picking up the new descriptor.
  2. libusb_close() will remove a file descriptor from the poll set. There are all kinds of race conditions that could arise here, so it is important that nobody is doing event handling at this time.

libusb handles these issues internally, so application developers do not have to stop their event handlers while opening/closing devices. Here's how it works, focusing on the libusb_close() situation first:

  1. During initialization, libusb opens an internal pipe, and it adds the read end of this pipe to the set of file descriptors to be polled.
  2. During libusb_close(), libusb writes some dummy data on this event pipe. This immediately interrupts the event handler. libusb also records internally that it is trying to interrupt event handlers for this high-priority event.
  3. At this point, some of the functions described above start behaving differently: libusb_event_handling_ok() starts returning 0 (telling any thread currently handling events to give up the events lock), libusb_try_lock_events() starts reporting the events lock as unobtainable, and libusb_event_handler_active() starts reporting that event handling is in progress, even if no thread is actually handling events.
  4. The above changes in behaviour result in the event handler stopping and giving up the events lock very quickly, giving the high-priority libusb_close() operation a "free ride" to acquire the events lock. All threads that are competing to do event handling become event waiters.
  5. With the events lock held inside libusb_close(), libusb can safely remove a file descriptor from the poll set, safe in the knowledge that nobody is polling those descriptors or trying to access the poll set.
  6. After obtaining the events lock, the close operation completes very quickly (usually a matter of milliseconds) and then immediately releases the events lock.
  7. At the same time, the behaviour of libusb_event_handling_ok() and friends reverts to the original, documented behaviour.
  8. The release of the events lock causes the threads that are waiting for events to be woken up and to start competing to become event handlers again. One of them will succeed; it will then re-obtain the list of poll descriptors, and USB I/O will then continue as normal.

libusb_open() is similar, and is actually a simpler case. Upon a call to libusb_open():

  1. The device is opened and a file descriptor is added to the poll set.
  2. libusb sends some dummy data on the event pipe, and records that it is trying to modify the poll descriptor set.
  3. The event handler is interrupted, and the same behaviour change as for libusb_close() takes effect, causing all event handling threads to become event waiters.
  4. The libusb_open() implementation takes its free ride to the events lock.
  5. Happy that it has successfully paused the events handler, libusb_open() releases the events lock.
  6. The event waiter threads are all woken up and compete to become event handlers again. The one that succeeds will obtain the list of poll descriptors again, which will include the addition of the new device.

Closing remarks

The above may seem a little complicated, but hopefully I have made it clear why such complications are necessary. Also, do not forget that this only applies to applications that take libusb's file descriptors and integrate them into their own polling loops.

You may decide that it is OK for your multi-threaded application to ignore some of the rules and locks detailed above, because you don't think that two threads can ever be polling the descriptors at the same time. If that is the case, then that's good news for you because you don't have to worry. But be careful here; remember that the synchronous I/O functions do event handling internally. If you have one thread doing event handling in a loop (without implementing the rules and locking semantics documented above) and another trying to send a synchronous USB transfer, you will end up with two threads monitoring the same descriptors, and the above-described undesirable behaviour occurring. The solution is for your polling thread to play by the rules; the synchronous I/O functions do so, and this will result in them getting along in perfect harmony.

If you do have a dedicated thread doing event handling, it is perfectly legal for it to take the event handling lock for long periods of time. Any synchronous I/O functions you call from other threads will transparently fall back to the "event waiters" mechanism detailed above. The only consideration that your event handling thread must apply is the one related to libusb_event_handling_ok(): you must call this before every poll(), and give up the events lock if instructed.
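
To illustrate that last point, here is a rough sketch of what such a dedicated event-handling thread might look like; the names ctx, app_exiting and event_thread_func are hypothetical, and the context and shutdown flag are assumed to be set up elsewhere by the application:

// requires <libusb.h>; ctx and app_exiting are provided by the application
libusb_context *ctx;
volatile int app_exiting = 0;   // simplification: a real application should use proper atomics

void *event_thread_func(void *arg)
{
    (void)arg;
    while (!app_exiting) {
        libusb_lock_events(ctx);           // we intend to handle events for a long time

        while (!app_exiting) {
            // mandatory check before every poll: give up the lock if libusb
            // needs to interrupt us (e.g. around libusb_open()/libusb_close())
            if (!libusb_event_handling_ok(ctx))
                break;

            struct timeval tv = { 60, 0 };          // wake up periodically to re-check flags
            libusb_handle_events_locked(ctx, &tv);  // poll and handle events; we hold the lock
        }

        libusb_unlock_events(ctx);         // lets event waiters and libusb_open()/libusb_close() proceed
    }
    return NULL;
}

With such a thread running, other threads can simply call the synchronous I/O functions (or libusb_handle_events_completed()) and will transparently become event waiters as described above.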