
Tina Popielarczyk, Aug 2, 2024, 10:25:07 PM, to bullservmilea

My question is: what data type should I use to declare beta?
I know that a three-dimensional matrix type, matrix[X, Y, Z] beta, does not exist in Stan. Hence I would like advice on which data type I should use for beta to accomplish what I am trying to do. Thanks.

This is what I thought at first. Unfortunately, it turns out to be wrong for what I am doing. My model is a hierarchical binomial or multinomial regression. I had a version of the code with a for-loop over every observation in the model block. It worked fine when the number of subjects was small.

Maybe I just could not understand the manual. I know I can declare an array of vectors and matrices, and I know I can sum multiple vectors together.
But the manual does not specify how to sum vectors that are elements of an array.
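One way to do this (a sketch, not from the thread; the names beta and beta_total are made up, and it assumes the array syntax of recent Stan versions) is to declare an array of K vectors and accumulate them with a loop in a transformed parameters block:

```stan
parameters {
  array[K] vector[N] beta;   // K vectors of length N; requires Stan >= 2.26
}
transformed parameters {
  // Sum the K vectors elementwise into one vector.
  vector[N] beta_total = rep_vector(0, N);
  for (k in 1:K)
    beta_total += beta[k];   // vector addition, one array element at a time
}
```

The loop performs vectorised addition over whole vectors, so it stays cheap even when K is large.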


Background/aim: Three-dimensional kinematic measures of gait are routinely used in clinical gait analysis and provide a key outcome measure for gait research and clinical practice. This systematic review identifies and evaluates current evidence for the inter-session and inter-assessor reliability of three-dimensional kinematic gait analysis (3DGA) data.

Method: A targeted search strategy identified reports that fulfilled the search criteria. Full-text reports were tabulated and evaluated for quality using a customised critical appraisal tool.

Results: Fifteen full manuscripts and eight abstracts were included. Studies addressed both within-assessor and between-assessor reliability, with most examining healthy adults. Four full-text reports evaluated reliability in people with gait pathologies. The highest reliability indices occurred in the hip and knee in the sagittal plane, with lowest errors in pelvic rotation and obliquity and hip abduction. Lowest reliability and highest error frequently occurred in the hip and knee transverse plane. Methodological quality varied, with key limitations in sample descriptions and strategies for statistical analysis. Reported reliability indices and error magnitudes varied across gait variables and studies. Most studies providing estimates of data error reported values (S.D. or S.E.) of less than 5 degrees, with the exception of hip and knee rotation.

Conclusion: This review provides evidence that clinically acceptable errors are possible in gait analysis. Variability between studies, however, suggests that they are not always achieved.

I am now trying to understand why this is happening and how to fix it. With my previous printer, there was a calibration test model that one would print and whose measurements one would input back to the software, and this would basically fix dimensional issues, shrinkage, etc.

I find it strange that there is no dimensional calibration in PrusaSlicer. Even for injection-moulded parts, dimensions are adjusted just for plastic shrinkage; shouldn't PrusaSlicer do the same?

My printer is VERY accurate: it comes up with the same deviation on every single print. Therefore, I think the approach of the previous printer, in which a set of prints was measured and the measurements re-entered in the slicer, is reasonable.

The underlying reason that X and Y are more likely to be dimensionally off than Z is the mechanics of the axes. Z distance is controlled by stepper motor angle changes rotating a screw mechanism, whereas X and Y depend on the diameter of the drive pulley. It is easier to machine a screw with an accurate twist per distance than to get the diameter of a drive pulley exactly right.

The pulleys supplied with both my Prusas are slightly smaller in diameter than would move X and Y the desired distance. It is a small error, about 0.5 to 1% too little diameter. There are some other pulleys I have found with more accurate diameters, but later examples of those obtained by other users were reported to have eccentricity issues.

The intent of this question is to provide a reference about how to correctly allocate multi-dimensional arrays dynamically in C. This is a topic often misunderstood and poorly explained even in some C programming books. Therefore even seasoned C programmers struggle to get it right.

However, several high rep users on SO now tell me that this is wrong and bad practice. They say that pointer-to-pointers are not arrays, that I am not actually allocating arrays and that my code is needlessly slow.

You can also apply the same kind of pointer arithmetic on n-dimensional arrays as on plain one-dimensional arrays. With a regular one-dimensional array, the pointer arithmetic is trivial:

Again there is array decay. The variable arr, of type int [2][3], decays into a pointer to its first element. The first element is an int [3], and a pointer to such an element is declared as int (*)[3], an array pointer.

Contiguous allocation is also the reason why other similar standard library functions like memset, strcpy, bsearch and qsort work. They are designed to work on arrays allocated contiguously. So if you have a multi-dimensional array, you can efficiently search it and sort it with bsearch and qsort, saving you the fuss of implementing binary search and quick sort yourself and thereby re-inventing the wheel for every project.

Now to get back to the code in the question, which used a different syntax with a pointer-to-pointer. There is nothing mysterious about it. It is a pointer to pointer to type, no more no less. It is not an array. It is not a 2D array. Strictly speaking, it cannot be used to point at an array, nor can it be used to point at a 2D array.

A pointer-to-pointer can however be used to point at the first element of an array of pointers, instead of pointing at the array as a whole. And that is how it is used in the question - as a way to "emulate" an array pointer. In the question, it is used to point at an array of 2 pointers. And then each of the 2 pointers is used to point at an array of 3 integers.

This is known as a look-up table, which is a kind of abstract data type (ADT), which is something different from the lower level concept of plain arrays. The main difference is how the look-up table is allocated:

The 32 bit addresses in this example are made-up. The 0x12340000 box represents the pointer-to-pointer. It contains an address 0x12340000 to the first item in an array of pointers. Each pointer in that array in turn, contains an address pointing at the first item in an array of integers.

The look-up table is scattered all over the heap memory. It is not contiguously allocated memory in adjacent cells, because each call to malloc() gives a new memory area, not necessarily located adjacently to the others. This in turn gives us lots of problems:

We can't use the sizeof operator. Used on the pointer-to-pointer, it would give us the size of a pointer-to-pointer. Used on the first item pointed at, it would give us the size of a pointer. Neither of them is the size of an array.

We can't use standard library functions that expect an array (memcpy, memset, strcpy, bsearch, qsort and so on). All such functions assume they get arrays as input, with data allocated contiguously. Calling them with our look-up table as the parameter would result in undefined behavior, such as program crashes.

Since the memory is scattered, the CPU cannot utilize cache memory when iterating through the look-up table. Efficient use of the data cache requires a contiguous chunk of memory which is iterated through from top to bottom. This means that the look-up table, by design, has significantly slower access time than a real multi-dimensional array.

For each call to malloc(), the library code managing the heap has to calculate where there is free space. Similarly for each call to free(), there is overhead code which has to be executed. Therefore, as few calls to these functions as possible is often preferable, for the sake of performance.

As we can see, there are a lot of problems with pointer-based look-up tables. But they aren't all bad, it is a tool like any other. It just has to be used for the right purpose. If you are looking for a multi-dimensional array, which should be used as an array, look-up tables are clearly the wrong tool. But they can be used for other purposes.

A look-up table is the right choice when you need all dimensions to have completely variable sizes, individually. Such a container can be handy when, for example, creating a list of C strings. It is then often justified to accept the execution-speed penalty mentioned above in order to save memory.

Also, the look-up table has the advantage that you can re-allocate parts of the table at run-time without the need to re-allocate a whole multi-dimensional array. If this is something that needs to be done frequently, the look-up table might even outperform the multi-dimensional array in terms of execution speed. For example, similar look-up tables can be used when implementing a chained hash table.

The easiest form in modern C is to simply use a variable-length array (VLA): int array[x][y]; where x and y are variables given values in run-time, prior to the array declaration. However, VLAs have local scope and do not persist throughout the duration of the program - they have automatic storage duration. So while VLAs may be convenient and fast to use for temporary arrays, they are not a universal replacement for the look-up table in the question.

In modern C, you would use array pointers to a VLA. You can use such pointers even when no actual VLA is present in the program. The benefit of using them over a plain type* or a void* is increased type-safety. Using a pointer to a VLA also allows you to pass the array dimensions as parameters to the function using the array, making it both variable and type safe at once.
