The AOM codebase supports both 8-bit and high bit-depth content; therefore, throughout the codebase there are uint8_t * and uint16_t * buffers to handle each of the 3 planes (Y, U, V).
The trick comes when switching between these two kinds of buffers, according to the bit depth, using two macros:
#define CONVERT_TO_SHORTPTR(x) ((uint16_t *)(((uintptr_t)(x)) << 1))
#define CONVERT_TO_BYTEPTR(x) ((uint8_t *)(((uintptr_t)(x)) >> 1))
The x parameter is the buffer address: CONVERT_TO_SHORTPTR is used to go from a uint8_t * address to a uint16_t * address, and CONVERT_TO_BYTEPTR the other way around.
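To make my question concrete, here is a minimal sketch of how I currently understand the intended usage. This is my own illustration, not code from the AOM sources: the names fill_plane, pixels and fake8 are hypothetical, and I assume the real allocation is at least 2-byte aligned so the shift round-trips.

#include <stdint.h>
#include <stdlib.h>

#define CONVERT_TO_SHORTPTR(x) ((uint16_t *)(((uintptr_t)(x)) << 1))
#define CONVERT_TO_BYTEPTR(x) ((uint8_t *)(((uintptr_t)(x)) >> 1))

/* Hypothetical callee: receives the "fake" uint8_t * and recovers
 * the real uint16_t * before touching any samples. */
static void fill_plane(uint8_t *buf, int n) {
  uint16_t *buf16 = CONVERT_TO_SHORTPTR(buf);
  for (int i = 0; i < n; ++i) buf16[i] = 0x3FF; /* 10-bit max value */
}

int main(void) {
  int n = 64;
  /* The real allocation is an array of 16-bit samples. */
  uint16_t *pixels = malloc(n * sizeof(*pixels));
  if (!pixels) return 1;
  /* The halved address travels through the code disguised as a
   * uint8_t *; it is never dereferenced directly. */
  uint8_t *fake8 = CONVERT_TO_BYTEPTR(pixels);
  fill_plane(fake8, n);
  free(pixels);
  return 0;
}

If that reading is wrong, that is exactly the kind of correction I am hoping for.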
The thing is, I understand what these two macros do at the language level, but not why they are used.
For example, CONVERT_TO_SHORTPTR returns a completely different, higher address than the one passed in (if x is 0x1000, the result is 0x2000).
To me, this implies that the doubled address must always be valid, i.e., allocated in advance. Moreover, the reasoning behind these macros is still not clear to me, even after reading the commit messages for that part of the code and digging through the AOM sources.
Could someone explain the expected behaviour when using these macros and the reasoning behind the choice to use them?
I know quite some time has passed since this was implemented, but I am keeping my fingers crossed that someone still remembers.
Best regards,
Marian Aldescu