This PR is part of Hackathon Project 8: update STIR data to CUDA managed pointers, then adapt the sirf.STIR data containers and expose the underlying CUDA managed pointer to Python. It continues the earlier image stage of the same task: the Stage 1 STIR image part has already been merged upstream in UCL/STIR#1693, while the corresponding Stage 1 SIRF image part remains open and unmerged in #1380.
This new PR contains the SIRF continuation for Stage 1 / Stage 2 / Stage 3 together: it keeps the CUDA-managed image queries and Python exposure from the earlier image stage, extends the same path to acquisition data, and propagates the CUDA-enabled cSTIR build path required for the end-to-end managed-memory reconstruction proof. The companion STIR continuation is in the fork under Dimitra-Kyriakopoulou/STIR on branch Hackathon_project08_II. The runnable benchmark/proof repository for the whole Project 8 path, including Stage 1, Stage 2, and Stage 3, is Dimitra-Kyriakopoulou/hackathon_project_08_stir_cuda_managed_pointers.
Please read and adhere to the contribution guidelines.
Please tick the following:
https://github.com/SyneRBI/SIRF/pull/1381
—
Reply to this email directly, view it on GitHub, or unsubscribe.
You are receiving this because you are subscribed to this thread.
@KrisThielemans requested changes on this pull request.
I think we need to move some things to DataContainer, in particular . Not so sure why we didn't do that yet with address() itself.
I suggest to rename is_cuda_managed to supports_cuda_array_view to be consistent with existing code. We should have a virtual DataContainer version that just returns false.
I'm not so sure we need cuda_address() as it's really the same as address() but with a check. If we do have it, it should be in DataContainer (calling virtual DataContainer::address()), and it shouldn't be anywhere else.
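The virtual-default design suggested here — a base-class `supports_cuda_array_view()` that just returns false, overridden only by containers whose storage really is CUDA managed memory — can be sketched in Python terms (illustrative class names, not SIRF code) as:

```python
# Illustrative sketch of the reviewer's suggestion, NOT SIRF code:
# the base DataContainer provides a safe default of False, and only
# a (hypothetical) managed-memory-backed container opts in.

class DataContainer:
    def supports_cuda_array_view(self):
        # Safe default: most containers cannot expose a CUDA array view.
        return False


class ManagedMemoryContainer(DataContainer):
    def supports_cuda_array_view(self):
        # Hypothetical container backed by CUDA managed memory opts in.
        return True


print(DataContainer().supports_cuda_array_view())         # False
print(ManagedMemoryContainer().supports_cuda_array_view())  # True
```

The point of the default is that callers can query any container uniformly without knowing its concrete type.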
In .github/workflows/build-test.yml:
> @@ -2,7 +2,7 @@ name: Build-test CI
on:
  push:
-    branches: [ master ]
+    branches: [ master, Hackathon_project8_II ]
will need to undo
In .github/workflows/build-test.yml:
> @@ -17,6 +17,7 @@ on:
- NOTICE.txt
- .mailmap
- CITATION.cff
+ workflow_dispatch:
not used, so remove?
In .github/workflows/build-test.yml:
> @@ -102,7 +103,7 @@ jobs:
export PATH="/usr/lib/ccache:/usr/local/opt/ccache/libexec:$PATH"
cmake --version
#echo "cmake flags ${{ matrix.CMAKE_BUILD_TYPE }} ${{ matrix.EXTRA_BUILD_FLAGS }}"
- BUILD_FLAGS="-DCMAKE_BUILD_TYPE=${{ matrix.CMAKE_BUILD_TYPE }} -DUSE_SYSTEM_ACE=ON -DUSE_SYSTEM_Armadillo=ON -DUSE_SYSTEM_Boost=ON -DUSE_SYSTEM_FFTW3=ON -DUSE_SYSTEM_HDF5=ON -DBUILD_siemens_to_ismrmrd=ON -DUSE_SYSTEM_SWIG=ON -DCMAKE_INSTALL_PREFIX=~/install -DPYVER=3";
+ BUILD_FLAGS="-DCMAKE_BUILD_TYPE=${{ matrix.CMAKE_BUILD_TYPE }} -DUSE_SYSTEM_ACE=ON -DUSE_SYSTEM_Armadillo=ON -DUSE_SYSTEM_Boost=ON -DUSE_SYSTEM_FFTW3=ON -DUSE_SYSTEM_HDF5=ON -DBUILD_siemens_to_ismrmrd=ON -DUSE_SYSTEM_SWIG=ON -DCMAKE_INSTALL_PREFIX=~/install -DPYVER=3 -DDISABLE_Gadgetron=ON -DDISABLE_Registration=ON -DDISABLE_Synergistic=ON";
will need to undo
In src/xSTIR/cSTIR/include/sirf/STIR/stir_data_containers.h:
> +    const auto iter = pd_ptr->begin_all();
+    if (iter == pd_ptr->end_all())
+        return 0;
+    return reinterpret_cast<size_t>(&*iter);
Why did you do this? The original line leaves all this to STIR.
In src/xSTIR/cSTIR/CMakeLists.txt:
> @@ -42,6 +42,11 @@ target_include_directories(cstir PUBLIC
"$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}>$<INSTALL_INTERFACE:include>")
target_include_directories(cstir PUBLIC "${STIR_INCLUDE_DIRS}")
+if (STIR_WITH_CUDA)
we'll need a HAS_CUDA_RUNTIME_API instead (see below).
In src/xSTIR/cSTIR/stir_data_containers.cpp:
> @@ -25,12 +25,50 @@ limitations under the License.
 #include "stir/zoom.h"
 #include "stir/CartesianCoordinate3D.h"
 #include "stir/numerics/norm.h"
+#ifdef STIR_WITH_CUDA
+#if defined(__has_include)
+#if __has_include(<cuda_runtime_api.h>)
+#include <cuda_runtime_api.h>
+#define SIRF_HAS_CUDA_RUNTIME_API 1
+#endif
+#else
+#include <cuda_runtime_api.h>
+#define SIRF_HAS_CUDA_RUNTIME_API 1
+#endif
+#endif
we should define HAS_CUDA_RUNTIME_API at the CMake stage
In src/xSTIR/cSTIR/stir_data_containers.cpp:
>
using namespace stir;
using namespace sirf;
#define SIRF_DYNAMIC_CAST(T, X, Y) T& X = dynamic_cast<T&>(Y)
+namespace {
+
+bool
+pointer_is_cuda_managed(const void* ptr)
this function is generic (i.e. not STIR dependent), so should be moved to SIRF/Common.h
In src/xSTIR/cSTIR/stir_data_containers.cpp:
>
using namespace stir;
using namespace sirf;
#define SIRF_DYNAMIC_CAST(T, X, Y) T& X = dynamic_cast<T&>(Y)
+namespace {
+
+bool
+pointer_is_cuda_managed(const void* ptr)
+{
+#if defined(STIR_WITH_CUDA) && defined(SIRF_HAS_CUDA_RUNTIME_API)
+ if (ptr == nullptr)
+ return false;
+ cudaPointerAttributes attrs;
+ const cudaError_t err = cudaPointerGetAttributes(&attrs, ptr);
+ if (err != cudaSuccess) {
+ cudaGetLastError();
I guess this needs to be handled differently.
In src/xSTIR/cSTIR/stir_data_containers.cpp:
> +#if defined(STIR_WITH_CUDA) && defined(SIRF_HAS_CUDA_RUNTIME_API)
+ if (ptr == nullptr)
+ return false;
+ cudaPointerAttributes attrs;
+ const cudaError_t err = cudaPointerGetAttributes(&attrs, ptr);
+ if (err != cudaSuccess) {
+ cudaGetLastError();
+ return false;
+ }
+#if CUDART_VERSION >= 10000
+ return attrs.type == cudaMemoryTypeManaged;
+#else
+ return attrs.memoryType == cudaMemoryTypeManaged && attrs.isManaged;
+#endif
+#else
+ (void)ptr;
remove this line
> + return bool(parms.int_par(self.handle, 'ImageData', 'is_cuda_managed'))
+
+ @property
+ def cuda_address(self):
+ """Returns the CUDA-visible image pointer for managed-memory-backed images."""
+ if not self.is_cuda_managed:
+ raise error('image data is not backed by CUDA managed memory')
+ return parms.size_t_par(self.handle, 'ImageData', 'cuda_address')
+
+ @property
+ def __cuda_array_interface__(self):
+ """As per https://numba.readthedocs.io/en/stable/cuda/cuda_array_interface.html"""
+ if not self.supports_array_view:
+ raise ContiguousError("views not supported, please consider using `asarray()` or `as_array()`")
+ if not self.is_cuda_managed:
+ raise error('image data is not backed by CUDA managed memory')
Not sure what type of error we should be raising to conform to relevant standards.
> @@ -478,6 +478,28 @@ def __array_interface__(self):
return {'shape': self.shape, 'typestr': '<f4', 'version': 3,
'data': (parms.size_t_par(self.handle, 'ImageData', 'address'), False)}
+ @property
+ def is_cuda_managed(self):
+ """True if the backing image storage is CUDA managed memory."""
+ return bool(parms.int_par(self.handle, 'ImageData', 'is_cuda_managed'))
+
+ @property
+ def cuda_address(self):
+ """Returns the CUDA-visible image pointer for managed-memory-backed images."""
+ if not self.is_cuda_managed:
+ raise error('image data is not backed by CUDA managed memory')
+ return parms.size_t_par(self.handle, 'ImageData', 'cuda_address')
+
+ @property
+ def __cuda_array_interface__(self):
+ """As per https://numba.readthedocs.io/en/stable/cuda/cuda_array_interface.html"""
apparently that site is deprecated.
⬇️ Suggested change
-    """As per https://numba.readthedocs.io/en/stable/cuda/cuda_array_interface.html"""
+    """As per https://nvidia.github.io/numba-cuda/user/cuda_array_interface.html"""
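For reference, the CUDA Array Interface (version 3) is a plain dict, structurally parallel to the `__array_interface__` dict shown above; a minimal sketch, where `ptr` and the shape are hypothetical placeholder values:

```python
# Minimal sketch of a version-3 __cuda_array_interface__ dict.
# 'ptr' here is a placeholder device/managed pointer value, not a real one.
def make_cuda_array_interface(ptr, shape):
    return {
        'shape': shape,        # tuple of ints, as in __array_interface__
        'typestr': '<f4',      # little-endian float32, matching the image dtype
        'data': (ptr, False),  # (pointer as int, read_only flag)
        'version': 3,
        'strides': None,       # None means C-contiguous
    }


cai = make_cuda_array_interface(0xDEAD0000, (4, 128, 128))
print(cai['version'])  # 3
```

Consumers such as Numba or CuPy read this dict to wrap the device memory without copying.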
Dear Professor @KrisThielemans,
THANK YOU WHOLEHEARTEDLY!!! I have just closed SyneRBI/SIRF#1380, and I will now work on your comments.
@Dimitra-Kyriakopoulou pushed 1 commit.
@Dimitra-Kyriakopoulou pushed 1 commit.
@Dimitra-Kyriakopoulou commented on this pull request.
In .github/workflows/build-test.yml:
> @@ -2,7 +2,7 @@ name: Build-test CI
on:
  push:
-    branches: [ master ]
+    branches: [ master, Hackathon_project8_II ]
Undone.
> @@ -17,6 +17,7 @@ on:
- NOTICE.txt
- .mailmap
- CITATION.cff
+ workflow_dispatch:
Removed.
> @@ -102,7 +103,7 @@ jobs:
export PATH="/usr/lib/ccache:/usr/local/opt/ccache/libexec:$PATH"
cmake --version
#echo "cmake flags ${{ matrix.CMAKE_BUILD_TYPE }} ${{ matrix.EXTRA_BUILD_FLAGS }}"
- BUILD_FLAGS="-DCMAKE_BUILD_TYPE=${{ matrix.CMAKE_BUILD_TYPE }} -DUSE_SYSTEM_ACE=ON -DUSE_SYSTEM_Armadillo=ON -DUSE_SYSTEM_Boost=ON -DUSE_SYSTEM_FFTW3=ON -DUSE_SYSTEM_HDF5=ON -DBUILD_siemens_to_ismrmrd=ON -DUSE_SYSTEM_SWIG=ON -DCMAKE_INSTALL_PREFIX=~/install -DPYVER=3";
+ BUILD_FLAGS="-DCMAKE_BUILD_TYPE=${{ matrix.CMAKE_BUILD_TYPE }} -DUSE_SYSTEM_ACE=ON -DUSE_SYSTEM_Armadillo=ON -DUSE_SYSTEM_Boost=ON -DUSE_SYSTEM_FFTW3=ON -DUSE_SYSTEM_HDF5=ON -DBUILD_siemens_to_ismrmrd=ON -DUSE_SYSTEM_SWIG=ON -DCMAKE_INSTALL_PREFIX=~/install -DPYVER=3 -DDISABLE_Gadgetron=ON -DDISABLE_Registration=ON -DDISABLE_Synergistic=ON";
Undone.
@Dimitra-Kyriakopoulou commented on this pull request.
In src/xSTIR/cSTIR/include/sirf/STIR/stir_data_containers.h:
> +    const auto iter = pd_ptr->begin_all();
+    if (iter == pd_ptr->end_all())
+        return 0;
+    return reinterpret_cast<size_t>(&*iter);
Restored to STIR’s get_const_data_ptr() so this now defers back to STIR there.
@Dimitra-Kyriakopoulou commented on this pull request.
In src/xSTIR/cSTIR/CMakeLists.txt:
> @@ -42,6 +42,11 @@ target_include_directories(cstir PUBLIC
"$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}>$<INSTALL_INTERFACE:include>")
target_include_directories(cstir PUBLIC "${STIR_INCLUDE_DIRS}")
+if (STIR_WITH_CUDA)
Changed to a CMake-stage HAS_CUDA_RUNTIME_API check.
@Dimitra-Kyriakopoulou commented on this pull request.
In src/xSTIR/cSTIR/stir_data_containers.cpp:
> @@ -25,12 +25,50 @@ limitations under the License.
 #include "stir/zoom.h"
 #include "stir/CartesianCoordinate3D.h"
 #include "stir/numerics/norm.h"
+#ifdef STIR_WITH_CUDA
+#if defined(__has_include)
+#if __has_include(<cuda_runtime_api.h>)
+#include <cuda_runtime_api.h>
+#define SIRF_HAS_CUDA_RUNTIME_API 1
+#endif
+#else
+#include <cuda_runtime_api.h>
+#define SIRF_HAS_CUDA_RUNTIME_API 1
+#endif
+#endif
Done: HAS_CUDA_RUNTIME_API is now defined from CMake, not source probing.
>
using namespace stir;
using namespace sirf;
#define SIRF_DYNAMIC_CAST(T, X, Y) T& X = dynamic_cast<T&>(Y)
+namespace {
+
+bool
+pointer_is_cuda_managed(const void* ptr)
Moved to new src/common/include/sirf/common/Common.h.
>
using namespace stir;
using namespace sirf;
#define SIRF_DYNAMIC_CAST(T, X, Y) T& X = dynamic_cast<T&>(Y)
+namespace {
+
+bool
+pointer_is_cuda_managed(const void* ptr)
+{
+#if defined(STIR_WITH_CUDA) && defined(SIRF_HAS_CUDA_RUNTIME_API)
+ if (ptr == nullptr)
+ return false;
+ cudaPointerAttributes attrs;
+ const cudaError_t err = cudaPointerGetAttributes(&attrs, ptr);
+ if (err != cudaSuccess) {
+ cudaGetLastError();
I removed the explicit cudaGetLastError() clearing and now treat non-success from cudaPointerGetAttributes() as "no CUDA array view".
> +#if defined(STIR_WITH_CUDA) && defined(SIRF_HAS_CUDA_RUNTIME_API)
+ if (ptr == nullptr)
+ return false;
+ cudaPointerAttributes attrs;
+ const cudaError_t err = cudaPointerGetAttributes(&attrs, ptr);
+ if (err != cudaSuccess) {
+ cudaGetLastError();
+ return false;
+ }
+#if CUDART_VERSION >= 10000
+ return attrs.type == cudaMemoryTypeManaged;
+#else
+ return attrs.memoryType == cudaMemoryTypeManaged && attrs.isManaged;
+#endif
+#else
+ (void)ptr;
Removed.
@Dimitra-Kyriakopoulou commented on this pull request.
> + return bool(parms.int_par(self.handle, 'ImageData', 'is_cuda_managed'))
+
+ @property
+ def cuda_address(self):
+ """Returns the CUDA-visible image pointer for managed-memory-backed images."""
+ if not self.is_cuda_managed:
+ raise error('image data is not backed by CUDA managed memory')
+ return parms.size_t_par(self.handle, 'ImageData', 'cuda_address')
+
+ @property
+ def __cuda_array_interface__(self):
+ """As per https://numba.readthedocs.io/en/stable/cuda/cuda_array_interface.html"""
+ if not self.supports_array_view:
+ raise ContiguousError("views not supported, please consider using `asarray()` or `as_array()`")
+ if not self.is_cuda_managed:
+ raise error('image data is not backed by CUDA managed memory')
Changed this to raise AttributeError when unsupported, so the protocol behaves like a missing attribute.
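Raising `AttributeError` works well for optional protocols because `hasattr()`/`getattr()` then treat the attribute as absent; a small sketch (illustrative class, not SIRF code):

```python
# Why AttributeError fits here: when a property's getter raises
# AttributeError, Python reports the attribute as missing, so consumers
# probing for the protocol via hasattr() skip the object gracefully.

class Image:  # illustrative stand-in, not the SIRF ImageData class
    def __init__(self, cuda_managed):
        self._cuda_managed = cuda_managed

    @property
    def __cuda_array_interface__(self):
        if not self._cuda_managed:
            raise AttributeError('image data is not backed by CUDA managed memory')
        return {'shape': (2, 2), 'typestr': '<f4', 'data': (0, False), 'version': 3}


print(hasattr(Image(False), '__cuda_array_interface__'))  # False
print(hasattr(Image(True), '__cuda_array_interface__'))   # True
```

A different exception type (e.g. `RuntimeError`) would instead propagate out of `hasattr()`-style probes in some consumers, breaking duck-typed protocol detection.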
@Dimitra-Kyriakopoulou pushed 1 commit.
Dear Professor @KrisThielemans ,
THANK YOU WHOLEHEARTEDLY FOR YOUR REVIEW!!!
> I think we need to move some things to DataContainer, in particular . Not so sure why we didn't do that yet with address() itself.
> I suggest to rename is_cuda_managed to supports_cuda_array_view to be consistent with existing code. We should have a virtual DataContainer version that just returns false.
> I'm not so sure we need cuda_address() as it's really the same as address() but with a check. If we do have it, it should be in DataContainer (calling virtual DataContainer::address()), and it shouldn't be anywhere else.
I moved the CUDA-array-view support to the generic DataContainer layer, including the common C++/C/Python path, and renamed it to supports_cuda_array_view. cuda_address() is now only provided generically via DataContainer::address().
I updated the copyright year to 2026 (for you), and added Copyright 2026 Biomedical Research Foundation, Academy of Athens (for me). I added author lines only in Common.h; I did not add author lines to the Python files or to older C/C++ files that historically had none.
Please inform me if any of the changes are not OK, and I will very gladly attempt to correct them.
THANK YOU WHOLEHEARTEDLY!!!
dimitra
@KrisThielemans requested changes on this pull request.
Thanks!
Let's move the pointer functions to common/utilities.h (sorry).
Also, all address() functions in the derived classes will now need override. And add something to CHANGES.md along the lines of "add supports_cuda_array_view() and related functions to expose CUDA arrays directly to Python for DataContainers that support it."
In src/xSTIR/cSTIR/stir_data_containers.cpp:
> @@ -587,6 +589,18 @@ STIRImageData::set_data(const float* data)
//std::copy(data, data + n, image.begin_all());
}
+bool
+STIRImageData::supports_cuda_array_view() const
+{
+ return this->supports_array_view() && pointer_supports_cuda_array_view(_data->get_const_full_data_ptr());
+}
+
+bool
+STIRAcquisitionDataInMemory::supports_cuda_array_view() const
+{
+ return this->supports_array_view() && pointer_supports_cuda_array_view(reinterpret_cast<const void*>(this->address()));
I think this is a generic implementation that can be used in DataContainer, and we then won't need it anywhere else
In src/xSTIR/cSTIR/CMakeLists.txt:
> +if (TARGET CUDA::cudart)
+ get_target_property(_cudart_include_dirs CUDA::cudart INTERFACE_INCLUDE_DIRECTORIES)
+ set(_saved_required_includes "${CMAKE_REQUIRED_INCLUDES}")
+ if (_cudart_include_dirs)
+ set(CMAKE_REQUIRED_INCLUDES ${_cudart_include_dirs})
+ endif()
+ check_include_file_cxx(cuda_runtime_api.h HAS_CUDA_RUNTIME_API)
+ set(CMAKE_REQUIRED_INCLUDES "${_saved_required_includes}")
+ if (HAS_CUDA_RUNTIME_API)
+ target_compile_definitions(cstir PUBLIC HAS_CUDA_RUNTIME_API)
+ target_link_libraries(cstir CUDA::cudart)
+ endif()
+endif()
@casperdcl are these checks needed? i.e. if the target exists, aren't we guaranteed to have cuda_runtime_api.h?
In src/common/include/sirf/common/Common.h:
> +You may obtain a copy of the License at
+http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+*/
+
+/*!
+\file
+\ingroup Common
+
+\author Dimitra Kyriakopoulou
+\author Kris Thielemans
I didn't write any of this, so remove me!
> @@ -478,6 +478,28 @@ def __array_interface__(self):
return {'shape': self.shape, 'typestr': '<f4', 'version': 3,
'data': (parms.size_t_par(self.handle, 'ImageData', 'address'), False)}
+ @property
+ def is_cuda_managed(self):
+ """True if the backing image storage is CUDA managed memory."""
+ return bool(parms.int_par(self.handle, 'ImageData', 'is_cuda_managed'))
+
+ @property
+ def cuda_address(self):
+ """Returns the CUDA-visible image pointer for managed-memory-backed images."""
+ if not self.is_cuda_managed:
+ raise error('image data is not backed by CUDA managed memory')
+ return parms.size_t_par(self.handle, 'ImageData', 'cuda_address')
+
+ @property
+ def __cuda_array_interface__(self):
+ """As per https://numba.readthedocs.io/en/stable/cuda/cuda_array_interface.html"""
resolved, but not changed?
@Dimitra-Kyriakopoulou pushed 1 commit.
@Dimitra-Kyriakopoulou commented on this pull request.
In src/xSTIR/cSTIR/stir_data_containers.cpp:
> @@ -587,6 +589,18 @@ STIRImageData::set_data(const float* data)
//std::copy(data, data + n, image.begin_all());
}
+bool
+STIRImageData::supports_cuda_array_view() const
+{
+ return this->supports_array_view() && pointer_supports_cuda_array_view(_data->get_const_full_data_ptr());
+}
+
+bool
+STIRAcquisitionDataInMemory::supports_cuda_array_view() const
+{
+ return this->supports_array_view() && pointer_supports_cuda_array_view(reinterpret_cast<const void*>(this->address()));
Done: moved to the generic DataContainer/common path and removed the STIR-specific implementation.
@Dimitra-Kyriakopoulou commented on this pull request.
In src/xSTIR/cSTIR/CMakeLists.txt:
> +if (TARGET CUDA::cudart)
+ get_target_property(_cudart_include_dirs CUDA::cudart INTERFACE_INCLUDE_DIRECTORIES)
+ set(_saved_required_includes "${CMAKE_REQUIRED_INCLUDES}")
+ if (_cudart_include_dirs)
+ set(CMAKE_REQUIRED_INCLUDES ${_cudart_include_dirs})
+ endif()
+ check_include_file_cxx(cuda_runtime_api.h HAS_CUDA_RUNTIME_API)
+ set(CMAKE_REQUIRED_INCLUDES "${_saved_required_includes}")
+ if (HAS_CUDA_RUNTIME_API)
+ target_compile_definitions(cstir PUBLIC HAS_CUDA_RUNTIME_API)
+ target_link_libraries(cstir CUDA::cudart)
+ endif()
+endif()
I simplified this by dropping the extra header check and using TARGET CUDA::cudart directly.
@Dimitra-Kyriakopoulou commented on this pull request.
In src/common/include/sirf/common/Common.h:
> +You may obtain a copy of the License at
+http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+*/
+
+/*!
+\file
+\ingroup Common
+
+\author Dimitra Kyriakopoulou
+\author Kris Thielemans
Because your guidance raises points I would not have thought of on my own, I experience this as a collaborative effort and for that reason I regard you as a coauthor.
Dear Professor @KrisThielemans ,
THANK YOU WHOLEHEARTEDLY!!!
Let's move the pointer functions to common/utilities.h (sorry).
Moved to src/common/include/sirf/common/utilities.h.
all address() functions in the derived classes will now need override
Done.
add something to CHANGES.md ...
Added.
resolved, but not changed?
This had already been changed; the current code raises AttributeError when unsupported, and the thread appears to be attached to an outdated diff.
I will be looking forward to your further evaluation!
THANK YOU WHOLEHEARTEDLY FOR ALL YOUR INVALUABLE HELP!!!
Dimitra
@Dimitra-Kyriakopoulou pushed 1 commit.