Roberta Murgo Thaise (born May 8, 1987) is a Brazilian model who has posed for numerous publications, including Maxim and the magazine Beauty is Divine.[2] In October 2006 she appeared on the cover of the German magazine Matador.[3]
After moving to São Paulo, Murgo signed with the modeling firm Agency One. She soon found work in various advertisements, posing for many companies and doing occasional bikini modeling. However, she never competed in any catwalk competitions, believing she was too short.
A month and a half after arriving in São Paulo, while living with 10 other models in an apartment, Murgo met photographer George Bishop. Bishop was working for the major media outlet Grupo Abril and helped Murgo set up a meeting with Playboy.[1] In August 2005 she made her Playboy debut, appearing in the 30th Year Commemorative Edition of Brazilian Playboy, with Grazielli Massafera as the cover model.[4] That same year, Murgo was invited back to make a second appearance in Maxim, this time in the December issue.
Roberta Gambine Moreira (born 7 December 1964) is a Brazilian fashion model, actress and television personality. She is frequently cited in the media as one of the greatest Brazilian icons and one of the country's main sex symbols of the 1980s and 1990s, in addition to being a pioneer of transfeminism in her native country.[1][2][3]
After debuting as a star at Carnaval in 1980, Close gained notoriety as the main character of the video for the song Dá Um Close Nela, by Erasmo Carlos, in 1984. In the video, which achieved great commercial success after its release on Fantástico, she plays a transvestite who attracts male gazes as she walks through the streets of Rio de Janeiro.[4][5][6] The same year, she became the first transgender model to appear in Playboy magazine, in an issue that set sales records upon launch. Close later appeared on the catwalk for numerous fashion houses, including Thierry Mugler, Guy Laroche and Jean Paul Gaultier. She has also been featured in editorials for Vogue and wrote a memoir called Muito Prazer, Roberta Close (1997).[7][8]
The biopsychosocial model is a modern humanistic and holistic view of the human being in the health sciences. Currently, many researchers think the biopsychosocial model should be expanded to include the spiritual dimension as well. However, "spiritual" is an open and fluid concept, and it can refer to many different things. This paper intends to explore the spiritual dimension in all its meanings: the spirituality-and-health relationship; spiritual-religious coping; the spirituality of the physician affecting his/her practice; spiritual support for inpatients; spiritual complementary therapies; and spiritual anomalous phenomena. In order to ascertain whether physicians would be willing to embrace them all in practice, each phrase from the Physician's Pledge in the Declaration of Geneva (World Medical Association) was "translated" in this paper to its spiritual equivalent. Medical practice involves a continuous process of revision of applied concepts, but a true paradigm shift will occur only when the human spiritual dimension is fully understood and incorporated into health care. Only then will one be able to set aside stereotypes and use the term "biopsychosocial-spiritual model" correctly. A sincere and profound application of this new view of the human being would bring remarkable transformations to the concepts of health, disease, treatment, and cure.
Given this scenario, there is an urgent need for disease stratification tools upon hospital admission, to allow early identification of the risk of death in COVID-19 patients, assisting in the management of disease and optimizing resource allocation, hopefully helping to save lives. Although several models and scores, based on traditional statistical methods and/or artificial intelligence (AI), have been proposed3,4,5,6,7, the majority present methodological flaws and technological limitations. Indeed, the majority of previous studies: (i) rely on limited sample sizes; (ii) lack consideration of prediction reliability (the correlation between the probability of the prediction and its accuracy), external validation, or systematic evaluation of multiple models; (iii) use inadequate evaluation metrics and/or a small number of predictors; and, finally, (iv) exploit at most a single model for the prediction task. All these issues mean that effective and reliable prognostic prediction models are still needed3,8,9.
Although an increasing number of prediction models have been proposed for the early assessment of the prognosis of COVID-19 patients, much work remains to be done in the design of COVID-19 prognostic models.
Overall, the majority of the developed models are limited by methodological bias; for example, external validation was absent in 72.89% of them, so the accuracy reported in such studies may be overestimated. Less than a quarter (around 21.49%) reported following the methodological recommendations of the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement11.
To properly assess the performance of different models, it is of utmost importance to use metrics that account for class imbalance, such as the macro-averaged F1-score (macro-F1), used in only 5.14% of studies. For example, Li et al.14 developed a deep-learning model and a risk-score system based on 55 clinical variables and observed that the most crucial biomarkers distinguishing patients at imminent risk of mortality were age, lactate dehydrogenase, procalcitonin, cardiac troponin, C-reactive protein, and oxygen saturation. The deep-learning model predicted mortality with AUCs of 0.852 and 0.844 for the training and testing sets, respectively, which is considered excellent. However, measured by the F1-score, the performance of the proposed algorithm on the same training and testing datasets dropped to 0.642 and 0.616.
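The gap between AUC/accuracy and macro-F1 under class imbalance can be seen in a small sketch with synthetic labels (illustrative only, not the paper's data): a degenerate model that always predicts the majority class scores high on accuracy but is exposed by macro-F1.

```python
from sklearn.metrics import accuracy_score, f1_score

# Synthetic, heavily imbalanced outcome: 90 survivors (0), 10 deaths (1).
y_true = [0] * 90 + [1] * 10
# A degenerate model that predicts "survivor" for every patient:
y_pred = [0] * 100

acc = accuracy_score(y_true, y_pred)                  # 0.90: looks strong
macro_f1 = f1_score(y_true, y_pred, average="macro")  # ~0.47: exposes the failure
print(acc, round(macro_f1, 2))
```

Macro-F1 averages the per-class F1 scores with equal weight, so the zero F1 on the minority (mortality) class drags the score down regardless of how rare that class is.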
Few studies (Table S1) analyzed in depth the impact of the variables on the final model or on the final model outcome. Notably, Ikemura et al.15 used SHAP values to analyze feature importance. Additionally, most studies did not investigate how reliable the predictions are, in terms of the correlation between the probability of the prediction and its accuracy. This analysis has implications for the practical use of this technology: an accurate but unreliable method has diminished practical applicability. We explicitly tackle these issues in our study.
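The reliability analysis described above can be sketched as a simple binning of predictions by confidence. The helper below is a hypothetical illustration (not the study's actual code): for a reliable model, the per-bin accuracy should track the mean confidence of the bin.

```python
import numpy as np

def reliability_bins(y_true, y_prob, n_bins=5):
    """Group binary predictions by confidence and report per-bin accuracy.

    For a reliable model, predictions made with ~0.9 confidence should be
    correct ~90% of the time, i.e. accuracy should track confidence.
    """
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    pred = (y_prob >= 0.5).astype(int)
    conf = np.where(pred == 1, y_prob, 1 - y_prob)  # confidence in the predicted class
    edges = np.linspace(0.5, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(conf, edges) - 1, 0, n_bins - 1)
    report = []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            report.append((edges[b], edges[b + 1],
                           float(conf[mask].mean()),
                           float((pred[mask] == y_true[mask]).mean())))
    return report  # (bin_low, bin_high, mean_confidence, accuracy) per populated bin

# Toy check: three confident, correct predictions plus one overconfident error.
print(reliability_bins([1, 1, 0, 0], [0.95, 0.97, 0.04, 0.6]))
```

An accurate-but-unreliable model would show per-bin accuracies that do not rise with confidence, which is exactly the failure mode the text warns about.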
This is a substudy of the Brazilian COVID-19 Registry, a multi-hospital cohort study previously described in16. All protocols were approved by the National Commission for Research Ethics (CAAE: 30350820.5.1001.0008). The development, validation, and reporting of the models followed guidance from the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) checklist and the Prediction model Risk Of Bias Assessment Tool (PROBAST)11,17.
Trained hospital staff or interns collected medical data using Research Electronic Data Capture (REDCap) tools19. Variables used to develop the models were obtained at hospital presentation. A set of potential predictor features for in-hospital mortality was selected a priori, as recommended, including comorbidities, lifestyle habits, and clinical assessment and laboratory data upon hospital admission: age; days from symptom onset; heart and respiratory rates; mechanical ventilation; fraction of inspired oxygen; urea; C-reactive protein; lactate; blood gas results (pH, pO2, pCO2, bicarbonate); hemogram parameters (hemoglobin, neutrophils, lymphocytes, neutrophil-to-lymphocyte ratio, platelets); and sodium upon hospital admission. For more details, see Supplementary Material S19,11. We call these patient or base features (we will use both terms interchangeably), to contrast with other meta-features used in the Stacking and derived from population information, as described later.
To ensure data quality, comprehensive data checks were undertaken. Error-checking code was developed in R to identify data entry errors, as previously described9. The results were sent to each center for checking and correction before further data analysis, model development and validation.
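The published checks were implemented in R; the sketch below, in Python with hypothetical plausibility ranges, only illustrates the kind of range check involved in flagging data entry errors.

```python
import pandas as pd

# Hypothetical plausibility ranges for a few admission variables (illustrative only).
RANGES = {"heart_rate": (20, 250), "respiratory_rate": (4, 80), "ph": (6.5, 8.0)}

def flag_entry_errors(df):
    """Return a boolean frame marking non-missing values outside their plausible range."""
    flags = pd.DataFrame(False, index=df.index, columns=list(RANGES))
    for col, (lo, hi) in RANGES.items():
        flags[col] = df[col].notna() & ~df[col].between(lo, hi)
    return flags

records = pd.DataFrame({"heart_rate": [72, 600],
                        "respiratory_rate": [18, 22],
                        "ph": [7.4, 7.1]})
print(flag_entry_errors(records))  # flags the impossible heart rate of 600
```

Flagged values are reported back to the originating center rather than corrected automatically, mirroring the workflow described above.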
In order to push the performance limits of our models, we experimented with creating novel artificial (meta-)features in addition to the base (patient) features. We call these new features meta-features because they are derived from other base features or from the application of ML models to them, and they strive to make the classes more separable. We experimented with two main types in the meta-models (ensembles): features derived from the population (population-based meta-features) and features derived from the output of classifiers (stacking-based meta-features).
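A minimal sketch of the stacking-based meta-features, using scikit-learn's StackingClassifier on synthetic data (the cohort data and the paper's exact model lineup are not reproduced here): the base learners' out-of-fold class probabilities serve as the meta-features consumed by a learnable combiner.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic, imbalanced stand-in for the patient (base) features.
X, y = make_classification(n_samples=600, n_features=12, weights=[0.8],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Base learners with different classification premises; their out-of-fold
# class probabilities become the stacking-based meta-features.
base = [("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("nb", GaussianNB())]
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(),  # learnable combiner
                           stack_method="predict_proba")
stack.fit(X_tr, y_tr)
print(round(stack.score(X_te, y_te), 3))
```

Pairing a geometric/ensemble learner with a probabilistic one follows the independence rationale discussed below for the combination model.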
In more detail, for the Stacking (ensemble) models, we used a combination (meta-)model that learns how to better combine the class probabilities from other models. The overall idea behind this type of meta-feature is that, if classifiers are at least partially independent, for instance due to sampling or to different classification premises (e.g., probabilistic, geometric), their errors will tend to fall on different instances, so the combined classification is more likely to be correct overall. For instance, in a binary classification problem, an ensemble of three completely independent, better-than-random classifiers (i.e., whose errors never fall on the same instances) guarantees at least two correct decisions for any given instance. This means that, even with weak learners, such a combined ensemble could be a very strong classifier. The difficulty lies in how to effectively make the classifiers independent in this manner, since for all practical purposes completely independent classifiers are an abstract concept. We can, however, strive to make classifiers more independent, for instance by using models with different classification premises, and even a learnable combination strategy.
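The argument above can be made concrete for the weaker, more realistic assumption of statistically independent (rather than never-overlapping) errors. For three such classifiers, each correct with probability p, a majority vote is correct when all three or exactly two of them are right:

```python
def majority_accuracy(p):
    """Probability that a 3-classifier majority vote is correct, assuming each
    classifier is independently correct with probability p."""
    return p**3 + 3 * p**2 * (1 - p)

for p in (0.6, 0.7, 0.8):
    print(p, round(majority_accuracy(p), 3))
# 0.6 -> 0.648, 0.7 -> 0.784, 0.8 -> 0.896: always above p when p > 0.5
```

Even this modest setting already beats each individual learner; the never-overlapping-errors case described in the text is the limiting, idealized version of the same effect.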