My first post to ask for some assistance. On one of our servers, HPE Smart Storage is reporting that "the array controller is rebuilding 8 TB SATA HDD at Port1." The active task is "Logical Drive 2 is rebuilding fault tolerance data. Progress: 86.71%." We have not swapped any of the physical drives.
Now I know that this can take a long time to complete, but is it normal for the progress to have moved only from 86.28% to 86.71% in three days? In the past, when we've swapped a physical drive, the array rebuild has usually completed in around 2 or 3 days. Is it possible the drive needs to be replaced, given that it's taking this long? Next question: I assume the drive shouldn't be replaced while it's rebuilding the fault tolerance data? Any assistance would be great. Thanks in advance.
I would replace that drive because it appears to be failing; that's most likely the reason the rebuild is so slow. Did you check the Smart Array controller for additional errors? Are the cache and battery OK?
You may also check the IML log in iLO. If something is wrong with your disk, there should be information about it there.
I suspect that the new field I added contains a character that splits it into additional fields when encountered. However, this is unexpected, since in both reports the field contains very similar text values, and I have added quotes in my UNLOAD command.
Does anyone have any idea what could be going wrong, and why one of the files is ingested successfully while the other fails?
Thx
I know this might be tedious, but I would try removing fields from your SELECT statement to find which field is causing the problem. Then I would check for things like commas that might be tricking the query into thinking there should be another field.
Hi @duncan. I am afraid I still have not solved this. The queries are quite complex, and so far my attempts to remove fields have still failed. My main concern is that the query successfully produces the report in Redshift. In Redshift, everything seems to be in place and the new field is displayed correctly. The import to QS is what is causing the problem. That is why I have run out of ideas. I would expect that if the Redshift table is OK, QS should be fine importing it.
Hi @duncan, thanks for the tip. Yes, importing the dataset from Redshift to QS directly is successful and everything looks as it should. I guess then something is wrong with my UNLOAD command? I find this strange, since I specify ADDQUOTES and ESCAPE, so that should be enough to correctly handle the text values and special characters of the problematic field and not split it, right? I should also mention that adding this field to another dataset did not create any problems. Any ideas?
Thx
You can also create a support ticket with AWS Support if the problem persists. Here are the steps to open a support case. If your company has someone who manages your AWS account, you might not have direct access to AWS Support and will need to raise an internal ticket with your IT team or whoever manages your AWS account. They should be able to open an AWS Support case on your behalf.
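Before going that route, one local check that might narrow this down is to scan the unloaded file and flag any rows whose field count differs from the column count of the query. This is only a rough sketch under my own assumptions: the file name, delimiter, and expected column count below are placeholders, not values from the thread, and it assumes the UNLOAD used ADDQUOTES and ESCAPE as described above.

```python
# Rough diagnostic sketch (not from the original poster): report rows in the
# UNLOAD output whose field count differs from the expected column count.
# File name, delimiter, and column count are assumptions - adjust to match
# your actual UNLOAD options.
import csv

UNLOAD_FILE = "report_0000_part_00"   # hypothetical file name
DELIMITER = "|"                        # Redshift UNLOAD default delimiter
EXPECTED_COLUMNS = 25                  # replace with your column count

with open(UNLOAD_FILE, newline="", encoding="utf-8") as f:
    reader = csv.reader(
        f,
        delimiter=DELIMITER,
        quotechar='"',      # ADDQUOTES wraps each field in double quotes
        escapechar="\\",    # ESCAPE prefixes special characters with a backslash
        doublequote=False,
    )
    for line_no, row in enumerate(reader, start=1):
        if len(row) != EXPECTED_COLUMNS:
            print(f"line {line_no}: {len(row)} fields -> {row}")
```

Any row this prints is a good candidate for the character that is splitting the problematic field.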
I am running a test on the yield strength of some material, but due to limitations in the measurement setup, I cannot run the test until the specimens break. For example, I can only run the test up to an applied force of 1 kN. This means I cannot find the actual yield strength and its distribution; I only know that I have a number of samples, N, that can withstand more than 1 kN of force.
I am wondering if it is possible to treat this with a proportional hazards model, with the force strength treated as if it were the time variable. In other words, rather than data censored by time to event, it would be analyzed as force applied to event. In both cases, the data is censored in that some observations (perhaps many) will not have the event at the end point (either time or force applied). I have no idea if this is a legitimate approach, but the situations seem analogous.
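For what it's worth, here is a minimal sketch of that idea in Python, treating the applied force as the "time" axis and right-censoring the specimens that survived the 1 kN limit. The lifelines package and the made-up specimen data are my own assumptions for illustration, not from the actual test; with covariates, a Cox model (lifelines' CoxPHFitter) could be fitted the same way.

```python
# Sketch: estimate P(strength > force) from force-censored data by treating
# force as the "duration" and survival past 1 kN as right-censoring.
import pandas as pd
from lifelines import KaplanMeierFitter

# force_kN: force at failure, or 1.0 if the specimen survived the full test
# failed:   1 = specimen yielded during the test, 0 = right-censored at 1 kN
df = pd.DataFrame({
    "force_kN": [0.62, 0.81, 0.95, 1.0, 1.0, 1.0, 0.88, 1.0],
    "failed":   [1,    1,    1,    0,   0,   0,   1,    0],
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["force_kN"], event_observed=df["failed"])
print(kmf.survival_function_)   # estimated P(strength > force) at each tested force
```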
I had also wondered about changing it to a pass/fail test, with the downside that it would increase the required sample size. But with the challenges you mentioned, that may look like the most suitable option.
Along with what @statman suggested, check out this paper -00/vol-38-01/v3801050.pdf, in which I believe they have a similar situation. They set up the experiment at different voltage levels; in your case, you may want to set up tests at different stress levels. The paper uses non-parametric inference. Related parametric approaches include logistic regression, probit analysis, etc.
Regarding what @dale_lehman suggested, proportional hazards might be relevant if you have other covariates. Otherwise, other techniques for analyzing time-to-event data can apply; just use your stress variable in place of the time-to-event variable. The relationship between one-shot experiment data and time-to-event data, and how to model them, can be found in this paper: Quantile POD for nondestructive evaluation with hit-miss data, by Yew-Meng Koh and William Q. Meeker.
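As a hedged sketch of the parametric route mentioned above, the pass/fail outcome at several stress levels can be modeled with a binary-response regression. The statsmodels package and the toy data below are my own assumptions, not from the thread; swap sm.Logit for sm.Probit if you prefer a probit link.

```python
# Sketch: logistic regression of failure probability vs. applied stress level.
import numpy as np
import statsmodels.api as sm

# Five specimens tested at each of four stress levels (kN), up to the 1 kN limit
stress = np.repeat([0.4, 0.6, 0.8, 1.0], 5)
failed = np.array([0, 0, 0, 0, 0,    # 0.4 kN
                   0, 0, 0, 1, 0,    # 0.6 kN
                   0, 1, 1, 0, 1,    # 0.8 kN
                   1, 1, 0, 1, 1])   # 1.0 kN  (1 = specimen yielded)

X = sm.add_constant(stress)            # intercept + stress level
fit = sm.Logit(failed, X).fit()
print(fit.summary())

# Estimated probability of failure at intermediate stress levels
print(fit.predict(sm.add_constant(np.array([0.7, 0.9]))))
```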
In response to a general request of CIE Technical Committee TC 1.3 for evaluation of the proposed small-color-difference formula, tentative CIE 1976 L*a*b*, unit color-difference contours generated by this formula have been calculated and compared to cross sections in the chromaticity diagram of ellipsoids fitted nearly optimally to relevant visual color-tolerance data.
For example, I am running the Integrate tool on a point dataset, as I want to move points that are within a certain distance from each other so that they are coincident. Then I can run the Collect Events tool to get a final result that sums up the total number of points that are coincident throughout the dataset.
However, I can't figure out how the XY tolerance works. I entered an XY tolerance of 3km, and expected the result to move any points that are within 3km of each other, but leave points that are further than 3 km away. Instead, it's moving points that are as much as 8.48km apart so that they are coincident (but as soon as the points are 8.49km apart, then the tool leaves them alone).
The maximum distance a coordinate could move to its new location during such operations is the square root of 2 times the x,y tolerance. The clustering algorithm is iterative, so it is possible in some cases for coordinate locations to shift more than this distance.
The exact value of the XY tolerance isn't the point for my question. I realize that you have to choose the value wisely depending on exactly what you are trying to do with your data. I'm working at a very small scale, so 3km actually isn't that large.
My question is more that when I put in 3km as the tolerance, the actual tolerance seems to be greater than 8km. I could replicate the same thing at a different scale, and specify an XY tolerance of 3m instead - but it will still move points that are more than 8m apart in my dataset, and I'd like to know why that is. It's not acting as I would expect, so I seem to be missing something with how it works.
Can you show your point pattern? I am just concerned that the comment implies a limitation in spacing relative to some other value that isn't documented (i.e. spacing relative to dataset extent, for instance).
In my case, with the 3km tolerance, the coordinate could move 4.24km, so when that's doubled, it means two points that are 8.48km apart could each move 4.24km to a new location between the original points.
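A quick check of that arithmetic (my own sketch, not from the Esri documentation): each coordinate can move up to sqrt(2) times the tolerance, so two points can end up merged when they start as much as twice that apart.

```python
# Verify the maximum movement implied by an XY tolerance of 3 km.
import math

tolerance_km = 3.0
max_single_move = math.sqrt(2) * tolerance_km   # ~4.24 km per point
max_merge_distance = 2 * max_single_move        # ~8.49 km between two points

print(f"max move per point:   {max_single_move:.2f} km")
print(f"max merge separation: {max_merge_distance:.2f} km")
```

That 8.49 km figure matches the observed behaviour of points up to 8.48 km apart being made coincident.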
In the days of ArcInfo Workstation, this was known as "fuzzy creep". It happened a lot when doing things like "Clean", so whenever possible we tried to do a "Build" instead. Of course this has nothing to do with the question really... just a bit of trivia.
This is an ancient topic but still valid. I have an enhancement request for this tool. The process and maximum movement make sense, but it is a bit hazy to figure out, after processing, how big the movement actually was. A good addition would be to store each movement in a table somehow, i.e. some sort of distance statistics telling how far each point was moved to its new location.
In 1985, FAO published a revised version of Irrigation and Drainage Paper No. 29. This publication incorporated an extensive list of crop salt tolerance data. Since then, Maas and Grattan (1999) have published updated lists of salt tolerance data. This annex reproduces these data together with the introductory sections.
The salt tolerance of a crop can best be described by plotting its relative yield as a continuous function of soil salinity. For most crops, this response function follows a sigmoidal relationship. However, some crops may die before the seed or fruit yields decrease to zero, thus eliminating the bottom part of the sigmoidal curve. Maas and Hoffman (1977) proposed that this response curve could be represented by two line segments: one, a tolerance plateau with a zero slope, and the other, a concentration-dependent line whose slope indicates the yield reduction per unit increase in salinity. The point at which the two lines intersect designates the threshold, i.e. the maximum soil salinity that does not reduce yield below that obtained under non-saline conditions. This two-piece linear response function provides a reasonably good fit for commercially acceptable yields plotted against the electrical conductivity of the saturated paste (ECe). ECe is the traditional soil salinity measurement with units of decisiemens per metre (1 dS/m = 1 mmho/cm). For soil salinities exceeding the threshold of any given crop, relative yield (Yr) can be estimated with the following equation:
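Assuming the standard Maas and Hoffman (1977) two-segment form described above (reproduced here for reference; check against the original annex):

$$ Y_r = 100 - b\,(EC_e - a) $$

where $a$ is the salinity threshold in dS/m and $b$ is the slope, expressed as the percentage yield reduction per unit increase in ECe beyond the threshold.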