In one of my projects I provide the following functionality:
```
def metric(a, b):
    ...
    return result

def apply(list_a, list_b, metric):
    for a in list_a:
        for b in list_b:
            metric(a, b)
```
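For reference, here is a runnable pure-Python variant of this sketch, with an illustrative absolute-difference metric and with `apply` collecting the results instead of discarding them:

```python
def metric(a, b):
    # illustrative metric: absolute difference
    return abs(a - b)

def apply(list_a, list_b, metric):
    # call the metric pairwise and collect the results
    return [metric(a, b) for a in list_a for b in list_b]

print(apply([1, 2], [4], metric))  # [3, 2]
```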
Since these constant type conversions become very slow, and since more optimized algorithms can often be used when working on multiple data elements at once, I match the passed metric against known functions to check whether I can use a C/C++ implementation directly:
```
def c_func(metric):
    if metric is metric1:
        ...  # use C implementation of metric 1
    elif metric is metricN:
        ...  # use C implementation of metric N
    else:
        ...  # call the Python function
```
This is a lot faster, but it has a couple of disadvantages:
1) the apply function needs to be updated whenever a new metric is added
2) it does not allow other modules to provide efficient metrics that the apply function does not know about
Now I am searching for a better way to pass these function pointers alongside the Python functions. So far I have had the following ideas:
1) replace the functions with extension types using __call__ for the normal function call
```
cdef class Metric:
    cdef void *funcptr

    def __call__(self, a, b):
        ...
        return result

metric = Metric()
```
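A pure-Python sketch of how the dispatch side of this first idea could work: `apply` detects the wrapper type via `isinstance` and falls back to a plain call otherwise. The class and function names here are illustrative stand-ins for the Cython versions:

```python
class Metric:
    """Stand-in for the cdef class; pairs a Python fallback with a fast implementation."""
    def __init__(self, py_func, fast_func):
        self.py_func = py_func      # plain Python fallback
        self.fast_func = fast_func  # stands in for the C function pointer

    def __call__(self, a, b):
        # normal function-call behavior goes through the Python fallback
        return self.py_func(a, b)

def apply(list_a, list_b, metric):
    if isinstance(metric, Metric):
        # fast path: dispatch to the wrapped implementation directly
        return [metric.fast_func(a, b) for a in list_a for b in list_b]
    # generic fallback: call the Python function pairwise
    return [metric(a, b) for a in list_a for b in list_b]

euclidean = Metric(lambda a, b: abs(a - b), lambda a, b: abs(a - b))
print(apply([1, 2], [3], euclidean))  # [2, 1]
```

Note that `apply` here needs no per-metric knowledge; any module can construct a `Metric` instance and benefit from the fast path.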
2) attach an extension type to the __dict__ of the metric function
```
def metric(a, b):
    return result

cdef MetricCallback cb = MetricCallback()
cb.funcptr = funcptr
metric.__ModuleName_MetricCallback = cb
```
Currently I am leaning towards the second version for the following reasons:
1) it is probably simpler to adopt in a third-party library, since it does not change the type of the metric function
2) if I decide in the future that more callbacks are needed for other functions, I can simply add more objects to the dict without breaking backwards compatibility
Are there any advantages/disadvantages of these approaches that I am missing? Are there better ways to achieve this, or other implementations that already make use of similar concepts?