Hello! First of all, I would like to say that you are doing an excellent job in this forum. I plan to conduct an academic study with your software, and the forum helped me a lot during preparation!
Now I have two questions concerning my choice-based conjoint study. I plan to estimate utilities with the HB approach. A friend of mine gave me an HB estimation report file, and I am not sure whether I understand the values correctly.
In the first tab of the Excel file there is a summary (average utilities, zero-centered diffs). The third tab shows the individual ZC diffs, and the fourth tab reports the individual raw utilities.
I read that the individual raw utilities are already zero-centered (effects coding), and that the individual ZC diffs are the normalized zero-centered raw utilities: they are normalized by multiplying by a unique constant per respondent, to put all respondents on the same scale.
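To check my understanding of that rescaling, I sketched it in a few lines. The target scale (utility ranges summing to the number of attributes times 100) is my assumption about the convention, and the raw utilities are invented:

```python
# Sketch of the per-respondent "ZC diffs" rescaling, assuming the convention
# that one constant per respondent is chosen so that the utility ranges of
# all attributes sum to (number of attributes x 100).
# The raw part-worths below are made up for illustration.
raw = {
    "brand": [1.2, -0.3, -0.9],   # zero-centered raw part-worths
    "style": [0.6, 0.1, -0.7],
}

ranges = {a: max(u) - min(u) for a, u in raw.items()}
k = 100 * len(raw) / sum(ranges.values())   # one constant per respondent

zc_diffs = {a: [k * u for u in levels] for a, levels in raw.items()}

# After rescaling, the ranges sum to n_attributes * 100:
total_range = sum(max(u) - min(u) for u in zc_diffs.values())
print(round(total_range, 6))  # 200.0 for the two attributes here
```

Because the constant differs per respondent, every respondent ends up on the same overall scale, which is presumably the point of the diffs tab.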
In my study I want to be able to say how the included attributes differ in their impact on preference. In the summary tab I see the average importances. Can I say, for example, that the attribute brand has a 40% impact on choice, while style has a 25% impact?
Since style is another attribute in my study, can I say which styles are preferred over others? And what is the difference between the average utilities given in tab 1 (averages of the zero-centered diffs) and the averages I would get from the individual raw utilities in tab 4? I read that "only differences in utility matter" and that multiplying part-worths by a constant does not change the meaning of the data, so I am confused about which average I should use and why.
Finally, I will need to sum part-worth utilities to obtain the total utilities for different product concepts, so that I can compare them and say, for example, that one concept is preferred over another. A marketing handbook I read says that for this purpose one simply sums up part-worths, but it does not say which values to sum: the average zero-centered diffs or the average raw utilities.
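To make the "only differences matter" point concrete for myself, I tried a toy example. Within one respondent, the choice between the two tabs cannot flip which concept wins, because the diffs are just the raw utilities times a positive constant (all numbers below are invented):

```python
# Toy part-worths for ONE respondent (invented numbers).
# A concept's total utility is the sum of the part-worths of its levels.
raw = {"brand_A": 1.0, "brand_B": -1.0, "style_X": 0.4, "style_Y": -0.4}
k = 37.5                       # any positive per-respondent scaling constant
diffs = {lvl: k * u for lvl, u in raw.items()}

concept1 = ["brand_A", "style_Y"]
concept2 = ["brand_B", "style_X"]

def total(utils, concept):
    return sum(utils[lvl] for lvl in concept)

# The ordering of the concepts is the same on either scale:
print(total(raw, concept1) > total(raw, concept2))      # True
print(total(diffs, concept1) > total(diffs, concept2))  # True
```

Across respondents, though, I suppose the two averages can differ, because each respondent gets a different constant. Maybe that is exactly why the normalized diffs exist?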
Thank you very much. Have a nice day.
--Thomas
Users of conjoint analysis are sometimes confused about how to interpret utilities. Difficulty most often arises in trying to compare the utility value for one level of an attribute with a utility value for one level of another attribute. It is never correct to compare a single value for one attribute with a single value from another. Instead, one must compare differences in values. The following example illustrates this point:
Sometimes we want to characterize the relative importance of each attribute. We do this by considering how much difference each attribute could make in the total utility of a product. That difference is the range in the attribute's utility values. We percentage those ranges, obtaining a set of attribute importance values that add to 100, as follows:
For this respondent, the importance of Brand is 26.7%, the importance of Color is 13.3%, and the importance of Price is 60%. Importances depend on the particular attribute levels chosen for the study. For example, with a narrower range of prices, Price would have been less important.
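The percentaging of ranges described above can be written out in a few lines. The part-worths here are hypothetical, chosen only so that the ranges come out 4, 2, and 9 and reproduce the importances quoted for this respondent:

```python
# Hypothetical part-worths for one respondent; only the ranges matter here.
utilities = {
    "Brand": [2, 0, -2],      # range 4
    "Color": [1, -1],         # range 2
    "Price": [5, 0, -4],      # range 9
}

# Importance = each attribute's range as a percentage of the total range.
ranges = {a: max(u) - min(u) for a, u in utilities.items()}
total = sum(ranges.values())                       # 15
importance = {a: 100 * r / total for a, r in ranges.items()}

print({a: round(v, 1) for a, v in importance.items()})
# {'Brand': 26.7, 'Color': 13.3, 'Price': 60.0}
```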
When summarizing attribute importances for groups, it is best to compute importances for respondents individually and then average them, rather than computing importances using average utilities. For example, suppose we were studying two brands, Coke and Pepsi. If half of the respondents preferred each brand, the average utilities for Coke and Pepsi would be tied, and the importance of Brand would appear to be zero!
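The Coke/Pepsi pitfall is easy to reproduce numerically. In this sketch (utilities invented), two respondents have opposite brand preferences of equal strength:

```python
# Two respondents with opposite brand preferences (invented numbers).
# Each dict maps attribute -> part-worths for its levels.
respondents = [
    {"brand": [2.0, -2.0], "price": [1.0, -1.0]},   # prefers Coke
    {"brand": [-2.0, 2.0], "price": [1.0, -1.0]},   # prefers Pepsi
]

def importances(utils):
    ranges = {a: max(u) - min(u) for a, u in utils.items()}
    s = sum(ranges.values())
    return {a: 100 * r / s for a, r in ranges.items()}

# Recommended: compute importances per respondent, then average.
per_resp = [importances(r) for r in respondents]
avg_right = sum(p["brand"] for p in per_resp) / len(per_resp)
print(round(avg_right, 1))   # ~66.7: brand matters a lot to everyone

# Naive: average the utilities first, then compute importance.
avg_utils = {a: [sum(r[a][i] for r in respondents) / len(respondents)
                 for i in range(2)] for a in ("brand", "price")}
brand_wrong = importances(avg_utils)["brand"]
print(brand_wrong)           # 0.0: brand appears to matter not at all
```

The opposite preferences cancel in the averaged utilities, so the naive calculation declares brand unimportant even though it dominates every individual's choices.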
Users of ACA or CVA may download a module named IMP.EXE from our Internet home page that will read files of individual utilities from ACA or CVA, create a file of individual attribute importances, and print the average importances. This should help you in determining and reporting attribute importances.
For one of my attributes, all the levels have small average utilities (L1: -1.34 / L2: 0.15 / L3: 1.19). What does it mean when the average utilities are all near zero? In general, how would I interpret an average utility of zero?
Another question: if two levels within the same attribute are not significantly different from each other, can I conclude that respondents are indifferent between those two levels and prefer both equally?
I am considering a power supply similar to the schematic below, safely enclosed in a box with my circuit. Apparent power will be around 5-10 VA, but real power is well under 100 mW. There will be several instances deployed, so the reactive power adds up, since each 10 W costs approx. $13 per year.
I haven't analyzed the circuit, but I'm not sure I believe that. What's the zener voltage? What's C1? 220uF? 220pF? It's kinda funny you've got 1W resistors if you only expect to dissipate 100mW total.
A normal house does not have so much reactive load as to make a difference.
Factories with huge motors drawing lots of current with phase difference between current & voltage are what cause the problems.
My house has minimal motors: refrigerator, washer, dryer, water pump for the furnace, and the occasional fan (window/ceiling fan, hair dryer, garage door opener, printer motor, computer fan). All small stuff.
Wall warts these days all use switching regulators to make 5 V, 9 V, 12 V, or 20 V for laptop chargers, and at pretty high currents. The big monster power bricks with huge transformers, full-wave bridge rectifiers, and hot-running regulators are fading away fast.
Thanks everyone. It's settled then: power utilities measure real power, and residential households are usually slightly inductive (i.e., more inductive loads than capacitive), so adding 10-20 VA of capacitive load should not add anything to the bill.
An electricity meter, electric meter, electrical meter, energy meter, or kilowatt-hour meter is a device that measures the amount of electric energy consumed by a residence, a business, or an electrically powered device. An electric meter or energy meter measures the total energy consumed over a time interval. Electric utilities use electric meters installed at customers' premises for billing and monitoring purposes. They are typically calibrated in billing units, the most common one being the k...
Not so fast. The utilities make certain assumptions regarding inductance. Big companies, commercial users, have "capacitor huts" and such to compensate, to bring the PF right.
It may be that the "smart meters" will put an end to the assumptions they have relied on until now.
Industrial users are monitored for the power factor they present. If it ever drops below a certain value in a three-month billing period, they get charged at a higher rate per unit. This makes it important for them to have power factor correction to keep them on the lower tariff; the higher tariff is there to encourage them to correct their power factor.
Domestic users are not monitored for power factor, so paying for power factor correction actually costs them more than doing nothing.
Some of the blogs linked above confuse those two facts and claim that power factor correction saves money. It only does if you are penalised for poor power factor.
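A quick sketch of the arithmetic behind "it only costs you if you are billed for it", using the OP's figures. The tariff and always-on duty cycle are assumptions:

```python
import math

P = 0.1          # real power, W (what the supply actually dissipates)
S = 10.0         # apparent power, VA (from the OP's estimate)
pf = P / S                        # power factor: 0.01 here, terrible
Q = math.sqrt(S**2 - P**2)        # reactive power, var (almost all of it)

tariff = 0.15    # assumed $/kWh
hours = 24 * 365                  # assumed running continuously

cost_real = P / 1000 * hours * tariff         # what a domestic meter bills
cost_if_apparent = S / 1000 * hours * tariff  # if VA were billed instead

print(round(cost_real, 2))         # ~$0.13/year
print(round(cost_if_apparent, 2))  # ~$13.14/year
```

So the "$13/year per 10 W" figure is what a 10 W *real* load would cost; a load drawing 10 VA at 100 mW real power costs about 13 cents a year on a meter that bills real energy only.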
Take the capacitor out of a fluorescent light and you run it virtually for free; remove it from your vacuum cleaner and that is free too. However, most domestic load is resistive, so the actual impact on the bill is quite small.
The power companies do not want a low power factor, as that increases dissipation in the generators. They monitor the power factor, and if everyone started buggering up their power factor, there are laws to stop you.
romanz:
It's settled then: power utilities measure real power, and residential households are usually slightly inductive (i.e., more inductive loads than capacitive), so adding 10-20 VA of capacitive load should not add anything to the bill.
That is pretty much the case. As Mike points out, industrial users are monitored for their power factor and penalised if it is undesirable. The "old-fashioned" power meters (no longer fitted to any installation as new or replacement) did measure "true" power, "smart" meters are now able (probably all of them) to monitor power factor in addition.
Your load here is capacitive, and not very "spiky" at all. KeithRB has in mind instead, switchmode supplies in which the current is drawn only near the peak of the AC waveform; their power factor is indeed poor but only slightly capacitive; though there are ways of correcting for this in very high power applications using a switching power converter on the input.
The (controlled) rectifier circuits really create harmonics, which makes them hard to classify (from a certain point of view they behave as capacitive, but in other respects they are different). So do electronic regulators, the most important domestic examples being triac light dimmers.
For a single home it is not important at all (and the meter is not smart enough to take this into account). The problem arises in big cities, where a lot of people have a large number of rectifiers all drawing (active) power in the same part of the AC cycle.
There are two LEDs on the front. One is the "power usage" light; the other is not identified in the document, but "VA" is printed on the case next to it.
It occasionally flashes, maybe once every 15 minutes, when the house is really sucking juice.
Paul__B:
Your load here is capacitive, and not very "spiky" at all. KeithRB has in mind instead, switchmode supplies in which the current is drawn only near the peak of the AC waveform; their power factor is indeed poor but only slightly capacitive; though there are ways of correcting for this in very high power applications using a switching power converter on the input.
No, I am thinking of the circuit he has here. Assuming a DC load on the output, the only time the 1000 uF capacitor is fed current is when the AC Voltage through the rectifier exceeds the DC voltage (+ 2 diode drops!). This causes the AC line to spike the current to recharge the capacitor during the short time of the peak.
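To put a number on "the short time of the peak": the bridge conducts only while the instantaneous line voltage exceeds the capacitor voltage plus two diode drops. A rough sketch (120 V RMS mains assumed, capacitor voltage between peaks and diode drop are guesses, ripple ignored):

```python
import math

V_rms = 120.0
V_pk = V_rms * math.sqrt(2)        # ~169.7 V peak
V_dc = 160.0                       # assumed capacitor voltage between peaks
V_diode = 0.7                      # assumed drop per diode; two conduct

# Current flows only while |v(t)| exceeds V_dc + 2 diode drops.
threshold = V_dc + 2 * V_diode
theta = math.asin(threshold / V_pk)       # angle where conduction begins
# Conduction runs from theta to (pi - theta) in each half-cycle:
conduction_fraction = (math.pi - 2 * theta) / math.pi

print(round(100 * conduction_fraction, 1))  # ~20% of each half-cycle
```

With these assumed numbers the capacitor is recharged during only about a fifth of each half-cycle, and all the charge has to rush in then, hence the current spikes on the AC line.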