Functions which cannot fail #43
Comments
See also: other examples of functions which cannot fail in the proposed PR python/cpython#112135, which adds a public API for the PyTime functions.
I don't buy the ease of use argument. A function that isn't used too often, and happens to not require error handling, will be painful to review (after we make the API consistent enough that reviewers don't need to look up every single function).

> Examples which cannot fail:

Yes, for underscored and unstable functions, which we can simply remove/replace if their behaviour/signature needs to change, this guideline isn't useful.

See above :)

I'd rather remove this from the API altogether, but, yes: reference counting is a good exception.

This always sets an exception, so it has a way of signaling failure.

> Counter-examples of functions which cannot report errors where they can fail:

> Corner cases:

I don't think this is a corner case. If we add new API for this, the dealloc function should raise normally and the caller should take care of “handling” the exception.

The interpreter might not be in a consistent enough state that exceptions can be raised, right?

If this was designed today, it should raise an exception.

I don't think this is a corner case -- it should have a way to report exceptions.
I don't know where this new constraint comes from. Would you mind creating an issue to elaborate on the ability to deprecate any function? Sure, it's appealing to be able to emit warnings and treat them as errors. But I don't think that it overrides other aspects of the API design.
That's #5 :)
The `Py_HashPointer()` function was added without error handling, whereas the `PyTime_Time()` function was added with error handling.
I'm no longer sure of the purpose of this issue. It doesn't propose anything; it was more a general discussion. IMO we should discuss error handling on a case by case basis. In general, functions should have a way to report errors to the caller. "Cannot fail" should remain the exception.
The `PyTuple_Check(obj)` function cannot fail and the caller is not expected to check for errors. Here, "cannot fail" means that if you pass a NULL pointer, the code does crash.
I'm fine with strongly suggesting to check for errors in the general case. I just ask for exceptions for specific cases.

Issue #5 requires all new functions to force the caller to always check for errors. IMO we need exceptions to that rule.
For example, PR python/cpython#112096 proposes adding a `Py_hash_t PyHash_Pointer(const void *ptr)` function which cannot fail. It would be annoying to have to check for a hypothetical error if the current implementation cannot fail, and it's unlikely that the function will change to report errors.

What matters here is to provide a convenient API, more than correctness. Obviously, if we enforced checking the result, it would be easier to change the API later. IMO it's just not worth it here.
For example, a function is unlikely to fail right now and in the future if it only works with basic C types such as `uint64_t` or `double`.

Some functions are also designed in a way so that they cannot fail. For example, `PyUnicode_EqualToUTF8()` cannot fail. But if the first argument is not a Unicode object, the function does crash. It's a design choice.

Having to check for errors on such a basic operation, "compare two strings", sounds really annoying. For example, the C `strcmp()` function cannot fail; it's the same. Obviously, if you pass a NULL pointer or strings which are not terminated by NUL, `strcmp()` does crash. That's the trade-off for a convenient API.
By the way, I think that it's fine in some cases to log exceptions with `sys.unraisablehook`: for example, an error in a callback that a function doesn't call directly, where the error cannot be reported to the function since an API abstracts the callback away. For example, a weakref can have a callback which fails. When a Python object is finalized, errors in weakref callbacks cannot be reported to the finalizer: the finalizer doesn't know about these callbacks nor how to handle their errors.

Examples which cannot fail:
- `int _Py_popcount32(uint32_t x)`
- `Py_hash_t PyHash_Pointer(const void *ptr)`
- `void Py_SET_REFCNT(PyObject *ob, Py_ssize_t refcnt)`
- `void PyErr_SetObject(PyObject *, PyObject *)`
Counter-examples of functions which cannot report errors where they can fail:

- `void Py_SetRecursionLimit(int)`: `sys.setrecursionlimit()` adds an additional check which cannot be implemented in the C function.
- `void PyFrame_FastToLocals(PyFrameObject *)`: `int PyFrame_FastToLocalsWithError(PyFrameObject *f)` had to be added later.
- `PyObject* PyDict_GetItem(PyObject *op, PyObject *key)`: silently ignores errors :-( `PyDict_GetItemRef()` and other functions were added to report errors to the caller.

Corner cases:
- `PyTypeObject.tp_dealloc` functions: if they raise an exception, they must log it with `PyErr_FormatUnraisable()`, which calls `sys.unraisablehook`.
- `void PyOS_AfterFork_Child(void)`: it's hard to report errors around a `fork()` call.
- `void PyObject_ClearWeakRefs(PyObject *)`
- `void Py_ReprLeave(PyObject *)`, but `int Py_ReprEnter(PyObject *)` can fail.