I don't clearly understand your problem, but if you want to use the Mamba 3D segmentation tools (the watershed transform), your 3D image must be stored in a Mamba image created with image3DMb. This 3D image is basically a stack of 2D Mamba images. Suppose your 3D Mamba image is named im3D. You then need to extract each slice i of your numpy array (I am not familiar with numpy, but I know this slicing capability exists) and store it in the corresponding slice im3D[i] of the Mamba 3D image. To achieve this, have a look at the example already cited in Nicolas' answer.
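To illustrate the numpy side of that slice-copy loop, here is a minimal sketch. It only shows how to pull each 2D slice out of a 3D array and turn it into a raw byte buffer; the actual loading into im3D[i] (shown as a comment, using an assumed loadRaw-style call) should be taken from the example cited above:

```python
import numpy as np

# Hypothetical 3D array: 5 slices of 128x64 pixels (depth, height, width).
arr = np.random.randint(0, 256, size=(5, 128, 64), dtype=np.uint8)

# Extract each 2D slice and convert it to a raw byte buffer. With Mamba you
# would then load each buffer into the matching slice of the 3D image, e.g.:
#   im3D[i].loadRaw(raw)   # assumed API -- check the cited Mamba example
raw_slices = []
for i in range(arr.shape[0]):
    # ascontiguousarray guarantees a row-major buffer before serializing
    raw = np.ascontiguousarray(arr[i]).tobytes()
    raw_slices.append(raw)

# Each buffer holds height*width bytes for an 8-bit greyscale slice.
print(len(raw_slices), len(raw_slices[0]))
```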
Be aware also that the horizontal size of a Mamba image is always a multiple of 64. So if your array has a different horizontal size, it will be padded with zeros, which may cause edge effects if you don't take care of it.
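If you want to control that padding yourself rather than let it happen implicitly, you can zero-pad the array's horizontal axis up to the next multiple of 64 before copying the slices. The helper below is a hypothetical illustration in pure numpy:

```python
import numpy as np

def pad_width_to_64(arr):
    """Zero-pad the last (horizontal) axis of arr up to the next multiple
    of 64, mirroring the size constraint of Mamba images."""
    w = arr.shape[-1]
    padded_w = -(-w // 64) * 64          # ceil(w / 64) * 64
    pad = [(0, 0)] * (arr.ndim - 1) + [(0, padded_w - w)]
    return np.pad(arr, pad, mode="constant", constant_values=0)

a = np.ones((3, 100, 150), dtype=np.uint8)
print(pad_width_to_64(a).shape)   # width 150 is padded up to 192
```

Knowing exactly where the zero border starts lets you mask it out afterwards and avoid the edge effects mentioned above.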
Once you have a 3D Mamba image, you can apply the 3D watershed transform to it (valuedWatershed3D). The basinSegment3D and watershedSegment3D operators cannot be used directly on the initial image, as they require a labelled flooding-source image (generated by label3D). Here again, have a look at the documentation given in the Mamba examples.
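The marker-based route described above can be sketched as follows. This assumes the mamba3D operators mirror their 2D counterparts (minima3D, label3D and watershedSegment3D taking in/out images like minima, label and watershedSegment, and image3DMb accepting a model image plus a bit depth); verify the exact signatures against the Mamba documentation before relying on it:

```python
def segment3d(m3D, imIn):
    """Marker-based 3D watershed sketch.
    m3D: the imported mamba3D module; imIn: an 8-bit image3DMb holding
    the volume (typically its gradient)."""
    # Flooding sources: the minima of the image, then labelled.
    minima = m3D.image3DMb(imIn, 1)    # binary image, assumed constructor
    m3D.minima3D(imIn, minima)
    markers = m3D.image3DMb(imIn, 32)  # 32-bit label image, assumed
    nb = m3D.label3D(minima, markers)
    # Watershed flooding from the labelled markers (assumed to modify
    # the marker image in place, as watershedSegment does in 2D).
    m3D.watershedSegment3D(imIn, markers)
    return markers, nb

try:
    import mamba3D as m3D  # only usable if Mamba is installed
except ImportError:
    m3D = None
```

If Mamba is installed, something like `markers, nb = segment3d(m3D, im3D)` would give the labelled catchment basins; for the valued watershed alone, valuedWatershed3D on the image directly is the shortcut.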