    API calls: raise original exception when retries exhausted · 30b43690
    Adam Williamson authored
    
This is an alternative approach to solving the problem identified
in #279. When doing API calls with `retry_on_error` set to
    true (the default), if the call fails after all retries are
    exhausted, we raise our own `MaximumRetriesExceeded` exception,
    which tells you nothing about what actually went wrong, just
    that we exhausted the retry count. It seems much more useful
    to raise an exception with information about why the connection
    is failing.
    
    This changes that behaviour so that, on most paths, we would
    raise an appropriate exception from requests (either the one
    we caught, or an HTTPError via `raise_for_status`) if we exhaust
    the retry count. Only if we fail due to database lag would we
    still raise `MaximumRetriesExceeded` (because there isn't really
    an underlying exception we can handily raise in that case).
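
    As a rough illustration of that behaviour (this is not the project's
    actual code: the function name, the retry/wait parameters and the
    `X-Database-Lag` header are made up for this sketch), a retry loop
    along these lines would now re-raise the caught requests exception,
    or call `raise_for_status()`, once retries are exhausted:

    ```python
    import time

    import requests


    # Hypothetical stand-in for the client's own exception class; after this
    # change it is only raised for the database-lag case.
    class MaximumRetriesExceeded(Exception):
        pass


    def fetch_with_retries(url, retries=5, wait=10):
        """Retry loop sketch: when retries run out, surface the real error."""
        for attempt in range(retries + 1):
            last = attempt == retries
            try:
                resp = requests.get(url, timeout=30)
            except (requests.exceptions.ConnectionError,
                    requests.exceptions.Timeout):
                if last:
                    raise  # re-raise the original requests exception
                time.sleep(wait)
                continue
            if resp.ok:
                return resp
            # hypothetical header standing in for "the server reports database lag"
            if resp.headers.get("X-Database-Lag"):
                if last:
                    raise MaximumRetriesExceeded("server kept reporting lag")
                time.sleep(wait)
                continue
            if last or not 500 <= resp.status_code < 600:
                # raises requests.exceptions.HTTPError carrying the real status
                resp.raise_for_status()
            time.sleep(wait)
    ```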
    
    Note that it was already possible to hit
    `stream.raise_for_status()` before this change, if the status
    code was outside the 500 range, so in theory callers should
    already be prepared for that. It was not possible to get a
    `requests.exceptions.ConnectionError` or
    `requests.exceptions.Timeout` with `retry_on_error` true, though,
    so this is genuinely a behaviour change on that front.
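
    For callers, that means the set of exceptions worth catching grows
    slightly. A sketch of caller-side handling, building on the
    illustrative `fetch_with_retries()` above (the URL is a placeholder):

    ```python
    import requests

    try:
        resp = fetch_with_retries("https://openqa.example.org/api/v1/jobs")
    except (requests.exceptions.ConnectionError,
            requests.exceptions.Timeout) as err:
        # new with this change: network failures surface once retries run out
        print(f"could not reach the server: {err}")
    except requests.exceptions.HTTPError as err:
        # already reachable before for non-5xx statuses; now also after retries
        print(f"request failed with an HTTP error: {err}")
    except MaximumRetriesExceeded:
        # still raised only when we give up waiting for database lag
        print("retries exhausted while the server reported database lag")
    ```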
    
    Signed-off-by: Adam Williamson <awilliam@redhat.com>