You are confusing concepts. Look again at your code: are you offsetting the MPI_Type_vector datatype? No! You are offsetting the buffer, not the MPI datatype.
So the solution is to figure out how to offset the buffer in a language like Python, where NumPy arrays and a different memory model mean you don't just get raw memory addresses with C's "&" operator, as in "&mat[0][1]".
Additionally, you also need to be very specific about what you are trying to communicate, otherwise mpi4py may error out due to its deliberately strict buffer checking. Long story short, here is the code:
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n = 5
mat = np.arange(n * n, dtype='d').reshape(n, n)

# One matrix column: n blocks of 1 double each, strided by the row length n.
mpi_column = MPI.DOUBLE.Create_vector(
    count=n,
    blocklength=1,
    stride=n,
).Commit()

idx = 1  # column index in range(0, n)
# The slice mat[0, idx:idx+1] starts at the address of mat[0][idx]; the
# explicit (buffer, count, datatype) triple tells mpi4py exactly what to send.
buf = (mat[0, idx:idx+1], 1, mpi_column)

col = np.zeros(n, dtype='d')
comm.Sendrecv(
    sendbuf=buf, dest=rank,
    recvbuf=col, source=rank,
)
mpi_column.Free()

print(mat)
print(col)
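For comparison, the access pattern the vector datatype describes is exactly a strided slice in pure NumPy, so you can check the expected column without MPI at all (a sketch under the same n and idx as above):

```python
import numpy as np

n = 5
mat = np.arange(n * n, dtype='d').reshape(n, n)
idx = 1

# What Create_vector(count=n, blocklength=1, stride=n) reads, starting at
# flat offset idx: every n-th element -- identical to NumPy's column slice.
col = mat.ravel()[idx::n].copy()
print(col)  # same values as mat[:, idx]
```

Running the MPI snippet on a single process should print this same column.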