Spine 2D Pixel Art


Abigail Tyrie

Aug 5, 2024, 3:53:38 AM
to arerrena
I don't know what I'm doing wrong... I think I'm having a brain freeze. I am really struggling with converting my Spine object's pixel coordinates to world coordinates. I recently converted all my code to work with the Ashley ECS, and now I can't get my Spine object to display in the correct position. I have a system which handles the rendering and positioning of my Spine object, but it isn't drawn where it should be. I'm hoping someone can point me in the right direction!

I have included my code for the Spine rendering system... hope you can help! I want to place the Spine object at the same position as my Box2D object, which uses world coordinates, while Spine uses pixel coordinates. I have also included an image to show what is happening (the grey square near the middle right of the screen is where I want my Spine object to be!).


What I do for my Spine renders is look at the bounding box size in pixels in Spine. This is usually on the order of hundreds of pixels. But if you are working with Box2D scales, it is recommended that you treat 1 world unit as 1 meter.
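For instance, a minimal sketch of that conversion — the 100 pixels-per-meter scale below is an assumed value you would read off your own Spine export, not anything fixed:

```python
# Assumed scale: a character whose bounding box is ~200 px tall in Spine
# and should stand ~2 m tall in the Box2D world gives 100 px per meter.
PIXELS_PER_METER = 100.0

def pixels_to_world(px: float) -> float:
    """Convert a Spine pixel length to Box2D world units (meters)."""
    return px / PIXELS_PER_METER

def world_to_pixels(meters: float) -> float:
    """Convert Box2D meters back to Spine pixels."""
    return meters * PIXELS_PER_METER
```

In practice you would set the skeleton's scale to `1 / PIXELS_PER_METER` once, so the whole skeleton renders in world units.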


Then you might also want to handle an offset when rendering your Spine object, as I see you are trying to do, because your root bone is possibly in the center of your Spine object (a hip, for example). However, you are doing a division operation, which I guess is exploratory, as offsets should be applied by addition or subtraction — division is only for converting pixel units to world units. Here is how I do it using the Spine pixel coordinates (again, sorry for the Kotlin, but I like it):
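The Kotlin snippet appears to have been lost in the copy, so here is a hedged sketch of the same idea in Python — convert the root-bone offset to world units, then add it to the body position (the names and the 100 px/m scale are illustrative assumptions):

```python
# Assumed pixels-per-meter scale; match it to your own Spine export.
PIXELS_PER_METER = 100.0

def skeleton_position(body_x: float, body_y: float,
                      root_offset_px_x: float, root_offset_px_y: float):
    """Place the skeleton so its root bone lines up with the Box2D body.

    The division converts the pixel offset into world units; the offset
    itself is then ADDED to the body position, not divided into it.
    """
    return (body_x + root_offset_px_x / PIXELS_PER_METER,
            body_y + root_offset_px_y / PIXELS_PER_METER)
```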


Have you heard of the method camera.project(worldCoordinates)? It might do what you are looking for: it takes world coordinates and turns them into screen coordinates. For the opposite direction you can use camera.unproject(screenCoordinates).
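For an orthographic camera with no zoom or rotation, those two calls boil down to roughly this arithmetic — a simplified Python sketch, not the actual libGDX implementation (note that libGDX's unproject additionally flips the screen y axis, which is omitted here):

```python
def project(world_x, world_y, cam_x, cam_y, ppm, screen_w, screen_h):
    """World (meters) -> screen (pixels), camera centered on the screen.

    ppm is the assumed pixels-per-meter scale of the viewport.
    """
    return ((world_x - cam_x) * ppm + screen_w / 2,
            (world_y - cam_y) * ppm + screen_h / 2)

def unproject(sx, sy, cam_x, cam_y, ppm, screen_w, screen_h):
    """Screen (pixels) -> world (meters); the inverse of project()."""
    return ((sx - screen_w / 2) / ppm + cam_x,
            (sy - screen_h / 2) / ppm + cam_y)
```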


Michael B. Myers Jr. is a designer living in Iowa. He developed covers for the Puffin Pixels Series of Penguin Random House. Here he details his origins as a designer, and how the Puffin Pixels Series came about.


It came out of thin air! I was contacted by an art director who works for the Puffin line of books over at Penguin Random House who said they had seen my pixel art work online and they thought I would be a good fit for the series. I was pretty ecstatic and had a great time working on the covers.


The art director I was working with gave me some initial ideas, and I just ran with that. The great thing about this series is that they are classic stories, so the concept and content are already there; I just had to illustrate it. The cool twist is that each cover somewhat mimics a classic video game, with items scattered about, as well as some character stats and inventory displayed on the back covers.


Hello Community,

I have three X-ray images (exported as JPEG) taken from the same patient. The three images were taken in different positions: 1. patient standing erect, 2. patient performing left lateral bending, 3. patient performing right lateral bending.

I am trying to measure the length of the spine using ImageJ, and I am converting the measurement from pixels to millimeters using the annotated ruler (real-life scale).

Ideally, I should get very similar spine lengths; however, I am getting about a 60-millimeter difference between the erect and the lateral bending images, and a difference of about 17 mm between the right and the left lateral bending.

Please guide me: what might be the reason for the discrepancy, and what should be fixed?


Welcome to the forum, and thanks for posting these interesting images. As my radiology lessons were some time ago, I just tried to measure the length of the thoracic spine. I did so by finding, in the caudal direction, the last vertebra where a rib starts. Then I marked 12 vertebrae in the cranial direction:

[image: the 12 marked vertebrae on each radiograph]


When I calculate the length in mm, I get 270 mm, 307 mm and 307 mm for the three images. Thus, between the last two images the length measurement is reproducible, but not compared to the first image. Furthermore, the first image also looks a bit different in contrast and zoom, and the labels on the image are different.
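A sketch of how such a measurement reduces to arithmetic (illustrative, not the actual ImageJ procedure): sum the pixel lengths of the segments between the marked vertebra points, then convert with the mm-per-pixel scale taken from the annotated ruler:

```python
import math

def spine_length_mm(points_px, mm_per_px):
    """Length of the polyline through the marked vertebra points.

    points_px: [(x, y), ...] pixel coordinates along the spine.
    mm_per_px: scale factor read off the annotated ruler (assumed known).
    """
    length_px = sum(math.dist(a, b)
                    for a, b in zip(points_px, points_px[1:]))
    return length_px * mm_per_px
```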


No, the patient in the erect image is standing at a distance of 72 inches from the detector, while in both bending images (right and left) the patient is lying on the table and the detector is 52 inches from the patient.
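That geometry difference is a plausible source of the discrepancy: a radiographic projection is magnified by M = SID/SOD (source-to-image distance over source-to-object distance), so a ruler that does not lie in the same plane as the spine calibrates the wrong scale. A hedged sketch of the correction — the distances in the test are illustrative, not the actual setup:

```python
def magnification(sid_mm: float, sod_mm: float) -> float:
    """Projection magnification M = SID / SOD."""
    return sid_mm / sod_mm

def true_length(measured_mm: float, sid_mm: float, sod_mm: float) -> float:
    """Correct a length measured at the detector plane for magnification."""
    return measured_mm / magnification(sid_mm, sod_mm)
```

Comparing the magnification of the erect setup with that of the lying setup would show whether it accounts for the ~60 mm gap.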


Image acquisition in ultra-high-resolution (UHR) scan mode does not impose a dose penalty in photon-counting CT (PCCT). This study aims to investigate the dose saving potential of using UHR instead of standard-resolution PCCT for lumbar spine imaging.


In PCCT of the lumbar spine, UHR mode's smaller pixel size facilitates a considerable CNR (contrast-to-noise ratio) increase over standard-resolution imaging, which can be used either for dose reduction or for higher spatial resolution, depending on the selected convolution kernel.


Prepare to unleash cinematic terror with FilmLUTs Horror, the ultimate arsenal for color grading your chilling footage. Elevate your storytelling with spine-chilling style, as professionally designed look-up-table cube files bring a new level of suspense to your productions. With FilmLUTs Horror, setting the perfect mood for your narrative is effortless, and every frame becomes an immersive cinematic masterpiece. Welcome to the world of FilmLUTs Horror, where your visuals set the stage for the most haunting tales.




Cervical spine diseases are recognized as a public health issue, characterized by diversity and high morbidity; they mainly include cervical spondylosis, malformations, fractures, instability, and spondylolysis1,2. Over a third of a billion people have suffered from persistent mechanical neck pain for at least three months, as indicated by a global assessment in 20153. X-ray is a common and cost-effective method to evaluate cervical spine diseases, especially in screening and follow-up4,5. It is imperative for post-operative assessment in Anterior Cervical Corpectomy and Fusion (ACCF), Anterior Cervical Discectomy and Fusion (ACDF), and Anterior Cervical Disc Replacement (ACDR)6.


Quantitative parameters in X-ray imaging serve as critical content for the assessment of cervical spine diseases7. In routine clinical practice, surgeons primarily rely on manual measurements or visual assessments, with disparities in professional expertise contributing to an elevated risk of misdiagnosis and measurement inaccuracies. The results of manual measurements are usually obtained by averaging the measurements of multiple surgeons. Nevertheless, this time-consuming and labor-intensive method lacks cross-checking8; thus it fails to reduce the subjective influence of surgeons and cannot mitigate the inherent errors associated with manual measurement. Moreover, the vast array of quantitative parameters for cervical spine disease assessment is extremely difficult to obtain by manual measurement.


Machine learning (ML) can assist with and replace manual efforts in performing extensive and precise complex calculations. Nevertheless, ML requires large-scale, high-quality training datasets consisting of raw images and annotated images. Presently, the publicly accessible large X-ray datasets predominantly encompass chest radiographs and fractures, with a portion of the studies incorporating merely classification data and thus lacking the annotations requisite for quantitative analysis9,10,11,12,13. Existing datasets of cervical spine X-rays, which amalgamate images of the cervical, thoracic, lumbar, and whole spine14, exhibit considerable variability stemming from the distinct anatomical structures of the vertebral bodies and their unique physiological and pathological characteristics. Such marked differences in data characteristics significantly limit their suitability for machine learning, as the heterogeneity hampers the consistency required for effective algorithmic training. Additionally, previous datasets suffer from small sample sizes or inconsistent image clarity, or are primarily used for reclassification tasks based on existing datasets rather than creating new data. Evidently, suitable cervical spine X-ray datasets are scarce. To fill the gap, we developed the Cervical Spine X-ray Atlas (CSXA), a dataset specifically and meticulously designed for the application of ML to cervical spine imaging.


The keypoint-based algorithm addresses the issues of laborious manual processes, measurement errors, lack of cross-checking, and incomplete parameter measurement17. However, the quantitative parameters for diagnosing cervical spine diseases are actual distances, while algorithmic outputs are pixel values. A previous study16 adopted the ratio of distances within images due to the challenge of acquiring the pixel equivalent. The pixel equivalent18, defined as the ratio of actual distance to pixel distance, plays a crucial role in converting a subset of the parameters in the study of cervical spine X-rays. It is essential to establish the relationship between pixel and physical dimensions to accurately translate them into actual distances and areas. In this study, we meticulously computed the pixel equivalent for each image with Python scripts by dividing the known spans of the graduated markings of the scale in each image by their lengths in pixels.
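As a sketch of that computation — the variable names are illustrative, not taken from the CSXA scripts:

```python
def pixel_equivalent(ruler_span_mm: float, ruler_span_px: float) -> float:
    """mm per pixel: the ratio of actual distance to pixel distance,
    measured on the annotated ruler visible in the image."""
    return ruler_span_mm / ruler_span_px

def px_to_mm(distance_px: float, mm_per_px: float) -> float:
    """Convert an algorithmic pixel output into an actual distance."""
    return distance_px * mm_per_px
```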


The CSXA, its algorithm, and the basic information are open-access, with the intention of aiding the research community in experiment replication and advancing the field of medical imaging of the cervical spine (Fig. 1).


The flowchart for creating the CSXA dataset: (1) Image A illustrates the construction of the raw image of cervical spine X-ray. (2) Image B shows the naming of the keypoints, as well as the naming of the raw images and annotated images. (3) Image C depicts the methods of image annotation and cross-checking. (4) Image D is a schematic diagram illustrating the calculation of pixel equivalent. (5) Image E demonstrates the main algorithms used for converting annotated images into quantitative parameters. (6) Image F presents the complete data of the CSXA dataset, including 4963 raw and annotated images, two types of codes, and data about all basic information and quantitative parameters.
