During external beam radiotherapy, normal tissues are irradiated along with the tumor. Radiation therapists try to minimize the dose to normal tissues while delivering a high dose to the target volume. This is often difficult, and complications arise from the irradiation of normal tissues. These complications depend not only on the dose but also on the volume of the organ irradiated. Lyman has suggested a four-parameter empirical model that can be used to represent normal tissue response under conditions of uniform irradiation of whole and partial volumes as a function of the dose and volume irradiated. In this paper, Lyman's model has been applied to a compilation of clinical tolerance data developed by Emami et al. The four parameters characterizing the tissue response have been determined, and graphical representations of the derived probability distributions are presented. The model may therefore be used to interpolate clinical data to provide estimated normal tissue complication probabilities for any combination of dose and irradiated volume for the normal tissues and end points considered.
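For orientation, the Lyman model is a probit (error-function) curve in dose, with the volume dependence folded into the tolerance dose. A common way of writing it (the exact parameterization may differ slightly from the paper's) is:

```latex
\mathrm{NTCP} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-x^{2}/2}\, dx,
\qquad
t = \frac{D - TD_{50}(v)}{m \, TD_{50}(v)},
\qquad
TD_{50}(v) = TD_{50}(1)\, v^{-n},
```

where D is the dose delivered uniformly to the fractional volume v, TD50(1) is the whole-organ dose producing a 50% complication probability, m controls the steepness of the dose response, and n controls the strength of the volume effect.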
My first post to ask for some assistance. On one of our servers, HPE Smart Storage is reporting that "the array controller is rebuilding 8 TB SATA HDD at Port 1." The active task is "Logical Drive 2 is rebuilding fault tolerance data. Progress: 86.71%." We've not swapped any of the physical drives.
Now, I know that it can take a long time for this to complete, but is it normal for the progress to have changed only from 86.28% to 86.71% in three days? In the past, when we've swapped a physical drive, the array rebuild has usually completed in around 2 or 3 days. Is it possible that the drive needs to be replaced, given that it's taking this long? Next question: I assume the drive shouldn't be replaced while it's rebuilding the fault tolerance data? Any assistance would be great. Thanks in advance.
I would replace that drive, because it seems to be failing; that's most likely the reason the rebuild is so slow. Did you check the Smart Array controller for additional errors? Are the cache and battery OK?
You may also check the IML log in the iLO. If there was something wrong with your disk, there should be information about it.
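Not an official HPE procedure, just a quick sketch for pulling controller, cache/battery, and drive status in one pass (assuming the ssacli utility from the HPE Smart Storage Administrator package is installed and the controller sits in slot 0; adjust the slot to match your system):

```python
# Hedged sketch: dump Smart Array status via HPE's ssacli CLI.
# Assumes ssacli is installed and the controller is in slot 0.
import subprocess

for cmd in (
    ["ssacli", "ctrl", "all", "show", "status"],                   # controller, cache, battery
    ["ssacli", "ctrl", "slot=0", "ld", "all", "show", "status"],   # logical drives / rebuild state
    ["ssacli", "ctrl", "slot=0", "pd", "all", "show", "status"],   # physical drives
):
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)
```

A physical drive reported as "Predictive Failure" here would back up the suggestion above to replace it.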
Objective: The glucose tolerance test (GTT) is widely used in human and animal biomedical and pharmaceutical research. Despite its prevalent use, particularly in mouse metabolic phenotyping, we are not aware of any studies that have attempted to qualitatively compare the metabolic events during a GTT in mice with those in humans.
Results: During the siOGTT in humans, there is a long period (>3hr) of glucose absorption and, accordingly, a large, sustained insulin response and robust suppression of lipolysis and endogenous glucose production (EGP), even in the presence of glucose intolerance. In contrast, mice appear to be highly reliant on glucose effectiveness to clear exogenous glucose and experience only modest, transient insulin responses with little, if any, suppression of EGP. In addition to the impaired stimulation of glucose uptake, mice with the worst glucose tolerance appear to have a paradoxical and persistent rise in EGP during the OGTT, likely related to handling stress.
Conclusions: The metabolic response to the OGTT in mice and humans is highly divergent. The potential reasons for these differences and their impact on the interpretation of mouse glucose tolerance data and their translation to humans are discussed.
I am running a test on the yield strength of some material, but due to limitations in the measurement setup, I cannot run the test until the specimens break; for example, I can only apply a force of up to 1 kN. This means I cannot find the actual yield strength and its distribution; I only know that I have a number of samples, N, that can withstand more than 1 kN of force.
I am wondering whether it is possible to treat this with a proportional hazards model, with the applied force treated as if it were the time variable. In other words, rather than data censored by time to event, it would be analyzed as force applied to event. In both cases, the data are censored in that some observations (perhaps many) will not have the event at the end point (either in time or in force applied). I have no idea whether this is a legitimate approach, but the situations seem analogous.
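This framing is legitimate: right-censoring does not care what the index variable is, as long as "larger" means "more exposure." A minimal sketch (assuming the Python lifelines package; the data below are made up) of fitting a parametric strength distribution with specimens censored at 1 kN:

```python
# Hedged sketch: treat applied force as the "time" axis of a
# right-censored survival analysis. Data below are hypothetical.
import numpy as np
from lifelines import WeibullFitter

# Force (kN) at which each specimen yielded, or 1.0 if it survived the full test
forces = np.array([0.62, 0.81, 0.95, 1.0, 1.0, 1.0, 0.77, 1.0])
# 1 = specimen yielded during the test, 0 = right-censored at 1 kN
yielded = np.array([1, 1, 1, 0, 0, 0, 1, 0])

wf = WeibullFitter()
wf.fit(forces, event_observed=yielded)   # force plays the role of "time"

print(wf.lambda_, wf.rho_)               # fitted Weibull scale and shape
print(wf.median_survival_time_)          # median strength estimate (kN)
```

A proportional hazards model proper would come into play once you have covariates (batch, temperature, etc.) to regress on, which is the point made further down the thread.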
I had also wondered about changing it to a pass/fail test, with the downside that it would increase the required sample size. But with the challenges you mentioned, that may look like the most suitable option.
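For a rough sense of that sample-size penalty: under the standard zero-failure (success-run) binomial argument, demonstrating reliability R at confidence C with no failures requires n >= ln(1 - C) / ln(R). A quick check (illustrative numbers, not from this thread):

```python
# Zero-failure (success-run) demonstration sample size: n >= ln(1-C)/ln(R).
import math

def success_run_n(reliability: float, confidence: float) -> int:
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

print(success_run_n(0.90, 0.95))  # 29 passes needed for 90% reliability at 95% confidence
print(success_run_n(0.99, 0.95))  # 299 passes needed for 99% reliability
```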
Along with what @statman suggested, check out this paper -00/vol-38-01/v3801050.pdf, in which I believe they have a similar situation: they set up the experiment at different voltage levels. In your case, you may want to set up tests at different stress levels. The paper uses non-parametric inference; related parametric approaches include logistic regression, probit analysis, etc.
Regarding what @dale_lehman suggested, proportional hazards might be relevant if you have other covariates. Otherwise, other techniques for analyzing time-to-event data can apply; just use your stress variable in place of the time-to-event variable. The relationship between one-shot experiment data and time-to-event data, and how to model them, can be found in this paper: Quantile POD for nondestructive evaluation with hit-miss data, by Yew-Meng Koh and William Q. Meeker.
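As a sketch of the parametric route mentioned above (assuming statsmodels; the stress levels and outcomes are invented for illustration), probit regression of fail/survive outcomes on the applied stress level looks like:

```python
# Hedged sketch: probit regression of pass/fail outcomes on stress level.
# Data are hypothetical.
import numpy as np
import statsmodels.api as sm

stress = np.array([0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2])  # test levels (kN)
failed = np.array([0,   0,   0,   1,   0,   1,   1])    # 1 = specimen failed

X = sm.add_constant(stress)
res = sm.Probit(failed, X).fit(disp=0)
print(res.params)  # intercept and slope on the probit scale

# Estimated failure probability at a new stress level, e.g. 0.95 kN
x_new = sm.add_constant(np.array([0.95]), has_constant="add")
print(res.predict(x_new))
```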
Thanks for chiming in here @BlockMeerkat111. We do have a way of generating these, but not in the "traditional sense," i.e. using the Tolerance Intervals red triangle menu option under the Distribution platform (similar to how you might report/generate these in a competitor's statistical package). I am planning to write a JMPer Cable article that walks through that process and will try to circulate it here once I have it done.
Any update on the JMPer Cable article? I agree with others that this would be a welcome addition to JMP. I'm currently working on a project that requires calculating tolerance intervals from numerous (100s of) simulated non-normal sample sets. I'm working mostly with Weibull, Lognormal, and SHASH distributions and would like to be able to calculate and export the lower bounds at 90% coverage and 95% confidence for these sample sets into table format, to estimate the predicted failure rate of being below the lower bound.
Hello @ChrisGaltress, no update unfortunately; I haven't been able to get to this. Let's try a different approach. Can you please send us an email to sup...@jmp.com and address it to me directly ("Hi Patrick...")? Please reference this community post and give it a meaningful subject line, e.g. "Calculation of Tolerance Intervals for Non-Normally Distributed Data." I can work with you directly to demonstrate the 'alternate' approach I have in mind for your case and to identify its limitations (offhand, we don't support SHASH in Life Distribution, which is where I think you would need to go here).
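In the meantime, for anyone who wants the computation outside JMP: one common route to a one-sided lower tolerance bound for non-normal data is a lower confidence bound on the corresponding quantile (90% coverage at 95% confidence is a 95% lower confidence bound on the 10th percentile). A rough sketch via parametric bootstrap for a Weibull fit (assuming scipy/numpy; this is not the JMP Life Distribution method, just one textbook approach):

```python
# Hedged sketch: approximate 95% lower confidence bound on the 10th
# percentile of a fitted Weibull, via parametric bootstrap.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.weibull_min.rvs(2.0, scale=100.0, size=50, random_state=rng)  # example sample

shape, _, scale = stats.weibull_min.fit(data, floc=0)  # location fixed at 0
p10_boot = []
for _ in range(2000):
    boot = stats.weibull_min.rvs(shape, scale=scale, size=len(data), random_state=rng)
    s, _, sc = stats.weibull_min.fit(boot, floc=0)
    p10_boot.append(stats.weibull_min.ppf(0.10, s, scale=sc))

print(np.percentile(p10_boot, 5))  # approximate lower tolerance bound
```

Looping this over your simulated sample sets and collecting the bounds into a table would then be straightforward.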
I am looking for a way to match two fields from different sources. I am using the Join tool to match all the other fields I need to match, but I have one field where I need to apply a tolerance of +/- 1.
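One way to express this kind of fuzzy numeric match, sketched here in pandas rather than the tool mentioned above (table and column names are made up):

```python
# Hedged sketch: join two tables on a numeric key with +/- 1 tolerance.
import pandas as pd

left  = pd.DataFrame({"key": [10, 20, 31], "a": ["x", "y", "z"]})
right = pd.DataFrame({"key": [11, 19, 40], "b": [1, 2, 3]})

# merge_asof requires both frames to be sorted on the join key
left, right = left.sort_values("key"), right.sort_values("key")
matched = pd.merge_asof(left, right, on="key", tolerance=1, direction="nearest")
print(matched)  # rows with no match within +/- 1 get NaN in "b"
```

Note that merge_asof keeps only the single nearest match; if a key may legitimately match several rows within the tolerance, a cross join filtered on abs(difference) <= 1 is the safer pattern.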
I suspect that the new field I added contains a character that splits it into more fields when encountered, although this is not expected, since in both reports the field contains very similar text values and I have added quotes in my UNLOAD command.
Does anyone have any idea what could be going wrong, and why one of the files is ingested successfully while the other fails?
Thx
I know this might be tedious, but I would try removing fields from your SELECT statement to find which field is causing the problem. Then I would check for things like commas that might be tricking the query into thinking there should be another field.
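A quick way to automate that hunt, sketched with Python's csv module (the file name and delimiter are placeholders; match them to your UNLOAD options): scan the unloaded file for rows whose field count differs from the first row's, which points straight at the column with stray delimiters or broken quoting.

```python
# Hedged sketch: flag rows whose field count differs from the first row's.
import csv

with open("unload_part_000", newline="") as f:
    reader = csv.reader(f, delimiter="|", quotechar='"')  # match your UNLOAD options
    expected = len(next(reader))  # first row's field count as the reference
    for lineno, row in enumerate(reader, start=2):
        if len(row) != expected:
            print(f"line {lineno}: {len(row)} fields (expected {expected})")
```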
Hi @duncan. I am afraid I still have not solved this. The queries are quite complex, and so far my attempts to remove fields have still failed. My main concern is that the query successfully produces the report in Redshift: in Redshift, everything seems to be in place and the new field is displayed correctly. The import into QS is what is causing the problem; that is why I have run out of ideas. I would expect that if the Redshift table is OK, QS should be fine importing it.
Hi @duncan, thanks for the tip. Yes, importing the dataset from Redshift into QS directly is successful, and everything looks as it should. I guess then something is wrong with my UNLOAD command? I find this strange, since I specify ADDQUOTES and ESCAPE, which should be enough to correctly delimit the text values and special characters of the problematic field and not split it, right? I'll also point out that adding this field to another dataset did not create any problems! Any ideas?
Thx
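If quoting is the culprit, one thing worth trying (a hedged sketch, assuming psycopg2 and an IAM role with S3 write access; the bucket, table, and role ARN are placeholders) is letting Redshift produce a native CSV instead of the ADDQUOTES/ESCAPE text format, since UNLOAD's CSV format handles embedded delimiters and quotes itself:

```python
# Hedged sketch: re-run the unload with Redshift's native CSV format.
import psycopg2

conn = psycopg2.connect(host="my-cluster.example.redshift.amazonaws.com",
                        dbname="mydb", user="me", password="...", port=5439)
with conn, conn.cursor() as cur:
    cur.execute("""
        UNLOAD ('SELECT * FROM my_schema.my_report')
        TO 's3://my-bucket/my_report_'
        IAM_ROLE 'arn:aws:iam::123456789012:role/my-unload-role'
        FORMAT AS CSV HEADER;
    """)
```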
You can also create a support ticket with AWS Support if the problem persists. Here are the steps to open a support case. If your company has someone who manages your AWS account, you might not have direct access to AWS Support and will need to raise an internal ticket with your IT team or whoever manages your AWS account. They should be able to open an AWS Support case on your behalf.
The RREF tolerance will affect the reference voltage for the ADC, but on the other hand it will not affect the value of the differential voltage across the RTD, I guess. Am I right?
Thank you for any response!
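For what it's worth, here is the usual ratiometric argument (assuming RREF both generates the ADC reference and carries the same excitation current as the RTD):

```latex
V_{REF} = I_{EXC} \, R_{REF},
\qquad
V_{RTD} = I_{EXC} \, R_{RTD},
\qquad
\text{code} \propto \frac{V_{RTD}}{V_{REF}} = \frac{R_{RTD}}{R_{REF}} .
```

So yes: the RREF tolerance leaves the differential RTD voltage untouched, but because the conversion result is the ratio of the two resistances, an RREF error of, say, +/- 0.1% shows up one-for-one as a +/- 0.1% gain error in the computed RTD resistance, even though the excitation current itself cancels out.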
- In choosing between the RM57 and TMS570 architectures, besides the temperature rating differences, what internal process variation exists to inform a decision as to which product line is better aligned with a radiation environment? Ideally, I'd like to stick with the RM57, since those are little-endian processors, which would make software development and third-party code integration a bit easier; that being said, if there's a compelling reason to go with the TMS570 lineup for SEL tolerance reasons, I would like to see some data on this.