Technical Article

Reliability in Electronics
CONTENTS

1. Introduction
   1.1 Failure Rate
   1.2 Reliability
   1.3 Mean Time Between Failures (MTBF), Mean Time To Failure (MTTF)
   1.4 Service Life (Mission Life, Life)
2. Factors Affecting Reliability
   2.1 Design Factors
   2.2 Complexity
   2.3 Stress
   2.4 Generic (Inherent) Reliability
3. Estimating The Failure Rate
   3.1 Prediction
       3.1.1 Parts Stress Method
       3.1.2 Parts Count Method
   3.2 Assessment
       3.2.1 Confidence Limits
       3.2.2 PRST
   3.3 Observation
4. Prototype Testing
5. Manufacturing Methods
6. System Reliability
   (a) More Reliable Components
   (b) Redundancy
7. Comparing Reliabilities


1. Introduction
Most of us are familiar with the concepts of reliability and MTBF at a superficial level, without considering what lies behind the figures quoted and what significance should be attached to them. The subject deserves a deeper understanding, so let us start by having a closer look at the terminology.
1.1 Failure Rate (λ)

The failure rate λ is defined as the percentage of units failing per unit time. It varies throughout the life of the equipment, and if λ is plotted against time, the characteristic "bathtub" curve is obtained for most electronic equipment (See Figure 1).

Fig 1. Failure Rate vs. Time (the "bathtub" curve: regions A, B and C)

This curve has three regions:
A - Infant mortality
B - Useful life
C - Wear-out

In region "A", poor workmanship and substandard components cause failures. This period is usually a few hundred hours and a "burn in" is sometimes employed to stop these failures occurring in the field. Note that this does not stop the failures occurring, it just ensures that they happen in-house and not on the customer’s premises.
In region "B",

is approximately constant and it is only for this region that the following analysis applies.

In region "C", components begin to fail through having reached their end of life, rather than by random failures.
Examples are electrolytic capacitors drying out, fan bearings seizing up, switch mechanisms wearing out etc. Well implemented preventive maintenance can delay the onset of this region.
1.2 Reliability (R(t))
There are a large number of definitions, and one will get different answers from statisticians, engineers, mathematicians and so on. An essentially practical definition is: the probability that a piece of equipment operating under specified conditions shall perform satisfactorily for a given period of time.
Probability is involved since it is impossible to predict the behaviour with absolute certainty. The criterion for "satisfactory performance" must be defined as well as the operating conditions such as input, output, temperature, load etc.
1.3 Mean Time Between Failures (MTBF), Mean Time To Failure (MTTF)
Strictly speaking, MTBF applies to equipment that is going to be repaired and returned to service, MTTF to parts that will be thrown away on failing. The MTBF is the inverse of the failure rate.
MTBF = 1/λ    ....(1)

Many people, unfortunately, misunderstand MTBF and tend to assume that the MTBF figure indicates a minimum, guaranteed time between failures. This assumption is wrong, and for this reason the use of the failure rate rather than the MTBF is highly recommended.
R(t) = e^(-λt) = e^(-t/m)    ....(2)

m = t / ln(1/R(t))    ....(3)

where:
R(t) = Reliability
e = exponential constant (2.718)
λ = Failure Rate
m = MTBF

Note that for a constant failure rate, plotting reliability against time "t" gives a negative exponential curve (See Figure 2).
Fig 2. Reliability R(t) plotted against λt for a unit with a constant failure rate

When t/m = 1, i.e., after a time “t”, numerically equal to the MTBF figure “m”:
R(t) = e^(-1) = 0.37    ....(4)

Equation (4) can be interpreted in a number of different ways:
(a) If a large number of units are considered, only 37% of them will survive for as long as the MTBF figure.
(b) For a single unit, the probability that it will work for as long as its MTBF figure, is only 37%.
(c) We can say that the unit will work for as long as its MTBF figure with a 37% Confidence Level.
In order to put these numbers into context, let us consider a power supply with an MTBF of 500,000 hours (a failure rate of 0.2%/1000 hours), or, as the advertising would put it, "an MTBF of 57 years!"
From eq.(2), R(t) for 26,280 hours (3 years) is approximately 0.95, i.e., if such a unit is used 24 hours a day for 3 years, the probability of it surviving that time is 95%. The same calculation for a ten year period will give an R(t) of 84%.
Now let us consider a customer who has 700 such units. Since we can expect, on average, 0.2% of units to fail per 1000 hours, approximately one unit per month will fail on average, since the number of failures per year is:
(0.2/100) x (1/1000) x 700 x 24 x 365 = 12.26
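These figures can be reproduced with a few lines of code. The following Python sketch assumes the constant-failure-rate model of equation (2) and uses the MTBF, fleet size and usage from the example above.

```python
import math

MTBF = 500_000           # hours (a failure rate of 0.2%/1000 hours)
lam = 1 / MTBF           # constant failure rate in failures per unit-hour

def reliability(t_hours):
    """R(t) = exp(-lambda * t), equation (2), for a constant failure rate."""
    return math.exp(-lam * t_hours)

print(reliability(3 * 8760))    # ~0.95 after 3 years of continuous (24 h/day) use
print(reliability(10 * 8760))   # ~0.84 after 10 years

# Expected failures per year across a fleet of 700 units running 24 hours a day
fleet, hours_per_year = 700, 24 * 365
print(lam * fleet * hours_per_year)   # ~12.3 failures/year, roughly one a month
```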

1.4 Service Life (Mission Life, Life)


Note that there is no direct connection or correlation between service life and failure rate. It is perfectly possible to design a very reliable product with a short life. A typical example is a missile: it has to be very, very reliable (MTBF of several million hours), but its service life is only 0.06 hours (4 minutes). 25 year old humans have an MTBF of about 800 years (FR about 0.1%/year), but not many have a comparable "service life". Just because something has a good MTBF, it does not necessarily have a long service life as well (See Figure 3).

Fig 3. Examples of Service Life vs. MTBF: missile, toaster, PSU, car, human and transatlantic cable, plotted as service life (0.1 to 100 years) against MTBF (10^3 to 10^9 hours)

2. Factors Affecting Reliability
2.1 Design Factors
The most important factor is good, careful design based on sound experience, resulting in known safety margins. Unfortunately, this does not show up in any predictions, since they assume a perfect design!
It has to be said that a lot of field failures are not due to the classical random failure pattern discussed here, but to shortcomings in the design and in the application of the components, as well as external factors such as occasional voltage surges, etc. These may well be ‘outside specification’ but no one will ever know; all that will be seen is a failed unit. Making the units rugged through careful design and controlled overstress testing is a very important part of making the product reliable.
The failure rate of the equipment depends on three other factors:
• Complexity
• Stress
• Inherent (generic) reliability of the components used
2.2 Complexity
Keep things simple - what isn’t there can’t fail; but be careful, because what isn’t there can also cause a failure! A complicated or difficult specification will invariably result in reduced reliability. This is not due to the shortcomings of the design staff, but to the resultant component count. Every component used will contribute to the equipment’s unreliability.
2.3 Stress
In electronic equipment, the most prominent stresses are temperature, voltage, vibration, and temperature rise due to current. The effect of each of these stresses on each of the components must be considered. In order to achieve good reliability, various derating factors have to be applied to these stress levels. The derating has to be traded off against cost and size implications.
Great care and attention to detail is necessary to reduce thermal stresses as far as possible. The layout has to be such that heat-generating components are kept away from other components and are adequately cooled.

Thermal barriers are used where necessary and adequate ventilation needs to be provided. The importance of these provisions cannot be overstressed since the failure rate of some components will double for a 10 °C increase in temperature. Note that decreasing the size of a unit without increasing its efficiency will make it hotter, and therefore less reliable!
2.4 Generic (Inherent) Reliability
Inherent reliability refers to the fact that film capacitors are more reliable than electrolytic capacitors, wirewrap connections more reliable than soldered ones, fixed resistors more reliable than pots, and so on. Components have to be carefully selected to avoid the types with high generic failure rates. Quite often, there is a cost trade off - more reliable components are usually more expensive.
3. Estimating the Failure Rate
The Failure Rate should be estimated and measured throughout the life of the equipment:
• During design, it is predicted
• During manufacture, it is assessed
• During the service life, it is observed
3.1 Prediction
Predicting the failure rate is done by evaluating each of the factors affecting reliability for each component and then summing these to get the failure rate of the whole equipment. It is essential that the database used is defined and used consistently. There are three databases in common use: MIL-HDBK-217, HRD5 and Bellcore. These reflect the experiences of the US Navy, British Telecom and Bell Telephone, respectively. Other sources of data are component manufacturers and some large companies like Siemens, Philips, France Telecom or Italtel. Data from these should not be used unless specifically requested by the customer.
In general, predictions assume that:
• The design is perfect, the stresses known, everything is within ratings at all times, so that only random failures occur
• Every failure of every part will cause the equipment to fail.
• The database is valid
These assumptions are wrong. The design is less than perfect, not every failure of every part will cause the equipment to fail, and the database is likely to be at least 15 years out-of-date. However, none of this matters much if the predictions are used to compare different topologies or approaches rather than to establish an absolute figure for reliability. This is what predictions should be used for.
3.1.1 Parts Stress Method
In this method, each factor affecting reliability for each component is evaluated. Since the average power supply has over 100 components and each component about 7 factors (Typically: stress ratio, generic, temperature, quality, environment, construction, and complexity) this method requires a considerable effort and time. Predictions are usually done in order to compare different approaches or topologies, i.e. when detailed design information is not available and the design itself is still in a fluid state. Under such circumstances, it is hardly worthwhile to spend this effort, and the much simpler and quicker Parts Count Method is used.
3.1.2 Parts Count Method
In this method, all like components are grouped together, and average factors allocated for the group. So, for example, instead of working out all the factors for each of the 15 electrolytic capacitors used, there is only one entry of ‘cap. electr.’ and a quantity of 15. Usually only two factors are allocated: generic and quality. The other factors, including stress levels, are assumed to be at some realistic level and allowed for in the calculation. For this reason, the factors are not interchangeable between the two methods. In general, for power supplies, HRD5 gives the most favourable result, closely followed by Bellcore, with MIL-HDBK-217F the least favourable. This depends on the mix of components in the particular equipment, since one database may be "unfair" on ICs, and another on FETs. Hence the importance of comparing results from like databases only.
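As an illustration of the parts count calculation, here is a minimal Python sketch. The component groups, generic failure rates and quality factors below are invented placeholder values for illustration only, not figures taken from MIL-HDBK-217, HRD5 or Bellcore.

```python
# Parts count sketch: failure rate = sum over groups of
#   quantity x generic failure rate x quality factor.
# All numeric values below are hypothetical placeholders, not database entries.
groups = [
    # (group, quantity, lambda_generic [failures per 10^6 h], pi_Q quality factor)
    ("cap. electr.",     15, 0.120, 3.0),
    ("resistor, fixed",  60, 0.002, 1.5),
    ("transistor, FET",   8, 0.050, 2.0),
]

lambda_total = sum(qty * lam_g * pi_q for _, qty, lam_g, pi_q in groups)
print(f"Predicted failure rate: {lambda_total:.3f} failures per 10^6 hours")
print(f"Predicted MTBF: {1e6 / lambda_total:,.0f} hours")
```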

3.2 Assessment
This is the most useful and accurate way of predicting the Failure Rate. A number of units are put on "life test" (more correctly described as a Reliability Demonstration Test), usually at an elevated temperature, so that the stresses and the environment are controlled. Note, however, that it is not always possible to model the real environment accurately in the laboratory.
During life-tests and reliability demonstration tests, it is usual to apply greater stresses than normal, so that we get to the desired result quicker. Great care has to be applied to ensure that the effects of the extra stress are known and proven to be calculable, and that no hidden, additional failure mechanisms are activated by the extra stress. The usual "extra stress" is an increase of temperature, and its effect can be calculated from the Arrhenius equation, as long as the maximum ratings of the device are not exceeded.
Note that the accelerating effect depends on the activation energy that applies to the chemistry of the particular component. A typical activation energy of around 0.55 eV would indicate that the Acceleration Factor from 25 °C to 50 °C is approximately 5.25, so be suspicious of results based on 0.7 eV, or even 1 eV.
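The acceleration factor can be checked with the Arrhenius relation AF = exp[(Ea/k)(1/T_use - 1/T_test)]. In the sketch below, the 0.55 eV figure is not quoted in the original text; it is simply the activation energy that reproduces the factor of about 5.25 between 25 °C and 50 °C, while the 0.7 eV and 1 eV cases show how quickly more optimistic activation energies inflate the claimed acceleration.

```python
import math

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant in eV/K

def acceleration_factor(ea_ev, t_use_c=25.0, t_test_c=50.0):
    """Arrhenius acceleration factor between a use and a test temperature."""
    t_use = t_use_c + 273.15
    t_test = t_test_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_test))

for ea in (0.55, 0.7, 1.0):
    print(f"Ea = {ea:0.2f} eV -> AF(25 C to 50 C) = {acceleration_factor(ea):.1f}")
# Prints roughly 5.2, 8.2 and 20: higher assumed activation energies
# exaggerate the acceleration, hence the warning above.
```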
At the beginning of such a test it is sometimes difficult to distinguish between early failures ("infant mortality", region A) and the first failures belonging to the "constant failure rate" region (region B). In such cases, the Cumulative Distribution Function is plotted on Weibull paper. This paper has double logarithmic scaling such that a constant failure rate will result in a straight line at an indicated gradient of 1. Decreasing FR (region A) will give a smaller gradient, increasing FR (wear-out, region C) a higher gradient. Both the available time and the number of units on test are limited, and so it is of the utmost importance that the maximum amount of useful information is extracted from a limited amount of data. Statistical methods are used to achieve this.
3.2.1 Confidence limits
What we are attempting to do is to predict the behaviour of the large number of units in the field (called the population) from the behaviour of a small number of randomly selected units (called the sample). This process is called Statistical Inference. The results obtained by such means cannot, of course, be completely accurate, and it is therefore essential to establish the degree of accuracy that applies. This is done by estimating the mean value and defining a band or an interval around this estimated mean that will include the actual, true mean value of the complete population. Such an interval is defined by a Confidence Limit, i.e., if we establish that the failure rate is between 1%/1000 hours and 2%/1000 hours with a Confidence Limit of 90%, this means that we expect 90% of the units in the field to exhibit failure rates between these limits, and the other 10% of units to have a lower or higher failure rate. For a population exhibiting a constant failure rate,
λ = χ²(2r+2),(1-Ø) / (2tN)    ....(5)

where:
λ = demonstrated failure rate with a one-sided upper confidence limit of Ø (phi)
t = test time
N = number of units on test
r = number of failures
χ²(2r+2),(1-Ø) = value of the χ² distribution with probability (1 - Ø) of being exceeded in random sampling, where (2r + 2) is the number of degrees of freedom
The constants given by this equation are tabulated below for values of r between 0 and 10, and for values of Ø of 0.6 and 0.9. (These are the usual Confidence Limits used in industry.)

r      Ø = 0.6         Ø = 0.9
0       93 x 10^3       230 x 10^3
1      200 x 10^3       390 x 10^3
2      310 x 10^3       530 x 10^3
3      420 x 10^3       670 x 10^3
4      530 x 10^3       790 x 10^3
5      630 x 10^3       910 x 10^3
6      730 x 10^3      1040 x 10^3
7      830 x 10^3      1160 x 10^3
8      930 x 10^3      1300 x 10^3
9     1040 x 10^3      1410 x 10^3
10    1140 x 10^3      1530 x 10^3

To use this table divide the factor given by the total number of unit-hours to get the failure rate in %/1000 hours. Let us consider the case when we have 50 units on test and one fails after 4 months (2920 hours): t = 2920, N = 50, r = 1
From the table, we can say with 60% confidence that the failure rate will be less than:
200,000 / (50 x 2920) = 1.37%/1000 hrs

Alternatively, we can say with 90% confidence that the failure rate will be less than:
390,000 / (50 x 2920) = 2.67%/1000 hrs


In the parent population, therefore, we expect 60% of the units to exhibit a failure rate better than 1.37%/1000 hrs (an MTBF of 73,000 hrs.), and therefore 40% of units to have a FR worse than that; or 90% of the units to be better than 2.67%/1000hrs (an MTBF of 37,400 hrs.), and therefore 10% of units to have a FR worse than that (See Figure 4).

Fig 4. Confidence Limit: distribution of the parent population by failure rate, with 60% of units below 1.37%/kh and 40% above, and 90% below 2.67%/kh and 10% above
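The table factors and the two limits above follow directly from equation (5). A minimal sketch using SciPy's chi-squared quantile function (this assumes SciPy is available; the 10^5 factor converts failures per unit-hour into %/1000 hours):

```python
from scipy.stats import chi2

def demonstrated_fr(test_hours, units, failures, confidence):
    """One-sided upper-limit failure rate in %/1000 h, from equation (5)."""
    dof = 2 * failures + 2
    factor = chi2.ppf(confidence, dof) / 2     # chi-squared quantile / 2
    unit_hours = test_hours * units
    return factor / unit_hours * 1e5           # per unit-hour -> %/1000 hours

# 50 units on test, one failure after 2920 hours:
print(demonstrated_fr(2920, 50, 1, 0.60))   # ~1.39 (the rounded table factor gives 1.37)
print(demonstrated_fr(2920, 50, 1, 0.90))   # ~2.66 (the table gives 2.67)
```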

However, there is a practical problem with this method: although we get valid answers, the length of time for that answer is a function of the number of failures. Suppose we want to show a FR of 0.5%/1000h at a CL of 60%, and we have 50 units.
We start the test and expect an answer after 23 weeks, if there are no failures. Should we have a failure though, the test time goes out to 48 weeks, or with two failures to 74 weeks! In fact, if we are unlucky, we could test for over a year, only to find, at the end, that we do not meet the required reliability. The test method that we need is one which will give us an answer in a fixed, pre-determined time. Such a method is called the Probability Ratio Sequential Test, or PRST.
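The test durations quoted above come straight from the Ø = 0.6 factors in the table. A small sketch of that arithmetic, assuming round-the-clock testing of 50 units:

```python
# Calendar time needed to demonstrate 0.5%/1000 h at 60% confidence with 50 units,
# as a function of how many failures occur during the test.
FACTORS_CL60 = {0: 93e3, 1: 200e3, 2: 310e3}   # r -> factor, from the table above

target_fr = 0.5      # %/1000 hours
units = 50

for r, factor in FACTORS_CL60.items():
    unit_hours = factor / target_fr            # unit-hours of testing required
    weeks = unit_hours / units / (24 * 7)      # weeks of continuous testing
    print(f"{r} failure(s): {weeks:.0f} weeks")
# Roughly 22, 48 and 74 weeks (the text rounds the first figure up to 23).
```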

3.2.2 PRST
Consider what happens if we plot the number of failures against test time in unit-hours. Since the failure rate is constant and the number of units is constant, we expect units to fail, on average, at equal intervals. The resultant graph will be a uniform staircase, with the trend line indicating the failure rate. As we know, the units will fail at random time intervals (not at a uniform interval); however, the trend line will still be as described above. So, the trend line is indicative of the final answer, and we shall not get a different answer as the test time increases, just more confidence in that answer. This means that we can draw conclusions early on, by taking some risks, simply by terminating the test at a pre-determined time ("accept"), or at a predetermined number of failures ("reject"). The risk is that the initial few failures are either too few or too many compared to the average, due to the random timing of the failures. The mathematics is complex, but the end result is simple.
Suppose we define the risks as follows:
1. There is a low FR that is acceptable to the producer, and a higher one acceptable to the customer. The ratio of the two values is called the Discrimination Ratio, and is usually 2. The FR and the DR need to be defined.
2. The risk of rejecting a "good" population on the basis of the PRST test has to be defined, and is called the Producer’s Risk.
3. The risk of accepting a "bad" population on the basis of the PRST test has to be defined, and is called the Consumer’s Risk.
4. The producer’s and consumer’s risks are normally equal, and between 10% and 40%.


Once these risks are defined, the length of time to an "accept" and "reject" result can be calculated, or looked up in tables such as the ones in MIL-HDBK-781. The test will be run according to this plan, and the product accepted or rejected within a fixed time-frame (See Figure 5).

Fig 5. PRST: number of failures plotted against unit-hours, with "accept" and "reject" boundary lines

3.3 Observation
This is observing the large population itself (as opposed to the small sample during assessment) and is the final proof and measure of the equipment's reliability. There is, normally, no need for Statistical Methods since there is plenty of data available.
The problems during this phase are twofold:
1. The sheer mechanics of actually collecting and collating the data.
2. The uncertainty of the duty, conditions of use and stresses, or abuse, that the units were subjected to.
Great care has to be exercised in drawing conclusions due to the difficulty of distinguishing between true random failures and misuse in the field (accidental or otherwise).

4. Prototype Testing
With all the sophisticated computer analysis, simulation and tolerancing methods available, there is still no substitute for thoroughly testing the maximum number of prototypes. An effort should be made to locate and use components from different batches, especially for critical components. These units must be tested under dynamic conditions to ensure reliability. An effective test is to cycle the temperature, the input, and the load independently. The units should be tested at both maximum and minimum temperatures while cycling according to this plan.
Cpk analysis of the results is used to ensure that the specification parameter margins are adequate. After testing, these units are normally used as the first batch on the reliability demonstration tests.
At least one unit should be subjected to HALT testing, and several to destructive overstress tests to establish the safety margins.
The timing of these tests is critical - it must not be so early in the development phase that the final circuit is radically different, and it must not be so late that production starts before the results are evaluated. A pitfall to watch out for, if changes are proposed as a result of these tests, is that the updated units must be subjected to long-term testing themselves.
5. Manufacturing Methods
This is a separate subject in itself, but there are three main factors contributing to unreliability in manufacture:
• Suppliers
• Manual assembly methods
• Tweaking of settings and parameters
Suppliers must be strictly controlled to deliver consistently good devices, with prior warning of any process changes and any other changes.
These days, with modern QA practices and JIT manufacturing methods, this is achieved by dealing with a small number of trusted suppliers. Manual assembly is prone to errors and to some random, unintentional abuse of the components by operators. This creates latent defects, which show up later.
Tweaking produces inconsistency and side effects. A good motto is: if it works, leave it alone; if it does not, find the root cause and do not tweak. There must be a root cause for the deviation, and this must be found and eliminated, rather than masked by the tweak. There are well-established TQM and SPC methods to achieve this. Testing and Quality Assurance has a major part to play. Testing must be appropriate to ensure that the units perform well in the application. Cpk analysis ensures that the specification parameter margins are adequate and controlled.

6. System Reliability
There are two further methods of increasing system reliability. Firstly, more reliable components. MIL standard or other components of assessed quality could be used, but in industrial and commercial equipment, the expense is not normally justified.
Secondly, redundancy. In a system where one unit can support the load, and two units are used in parallel, the system is much more reliable since the system will still work even with one unit failed. Clearly, the probability of two units failing simultaneously is much less than that of one unit failing. This system would have a big size and cost penalty (twice as big and twice as much), so normally an N+1 system is used, where N units can support the load, but N+1 units are used in parallel, "2+1" or "3+1" being the usual combinations. Supposing the reliability of each unit under the particular conditions is 0.9826 (m = 500,000 h, t = 1 year), the system reliability for an "N+1" system where N = 2 would be 0.9991, an improvement of 20 times (nearly 60 times in a 1+1 system).
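These redundancy figures can be checked with a short calculation. The sketch below assumes independent, identical units and, as the text does for the moment, ignores the failure rate of the sharing and isolation circuitry.

```python
import math

MTBF = 500_000
t = 8760                        # one year of continuous operation, in hours
R = math.exp(-t / MTBF)         # per-unit reliability, ~0.9826

def system_reliability(total_units, needed_units, r):
    """Probability that at least `needed_units` out of `total_units` survive."""
    return sum(
        math.comb(total_units, k) * r**k * (1 - r)**(total_units - k)
        for k in range(needed_units, total_units + 1)
    )

two_plus_one = system_reliability(3, 2, R)   # "2+1": 3 units, any 2 carry the load
one_plus_one = system_reliability(2, 1, R)   # "1+1": 2 units, either carries the load

print(two_plus_one)                          # ~0.9991
print((1 - R) / (1 - two_plus_one))          # unreliability improved ~20 times
print((1 - R) / (1 - one_plus_one))          # ~57 times, "nearly 60"
```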
However, there are many pitfalls in the system design, such as:
1. N units must be rated to support full load.
2. Any part failing must not make the system fail.
3. If any part fails this must be brought to the operator's notice so that it can be replaced.
4. Changing units must not make the system fail (hot plugging).
It is very difficult and tricky to design the system to satisfy items 2 & 3. For example, the failure of components that do not affect system operation when all units are OK, but would affect operation if there was a fault (such as an isolating diode going short circuit, or a paralleling wire or connector going open circuit), must be signalled as a problem, and must be repaired. The circuitry necessary to arrange for all this (isolating diodes, signalling logic, hot plugging components, current sharing, etc.) has its own failure rate, and so degrades the overall system failure rate. In the following illustrations, this is ignored for simplicity, but in a real calculation, it must be taken into account. In many applications, the only way to detect such latent faults is to simulate a part failing by shutting it down remotely for a very short time. This circuitry will, of course, increase complexity and decrease reliability further still, as well as being dangerous: a system failure could be caused by the test circuit shutting the system down.

Calculating system reliability involves the use of the binomial expansion, as follows:
(R + Q)^T = R^T + T R^(T-1) Q + (T(T - 1)/2!) R^(T-2) Q^2 + (T(T - 1)(T - 2)/3!) R^(T-3) Q^3 + .... + Q^T    ....(6)

where:
T = Total No. of Units
R = Probability of Success
Q = Probability of Failure = (1-R)
The 1st term is the probability that 0 units will fail,
The 2nd term is the probability that 1 unit will fail,
The 3rd term is the probability that 2 units will fail,
The 4th term is the probability that 3 units will fail,
The 5th term is the probability that 4 units will fail, … and so on.
These terms must be summed as appropriate, based on what combination of part failures gives a system failure.
For example, with 4 units of R = 0.8, the probability of failures is:

0 failures : 0.8^4 = 0.4096
1 failure  : 4 x 0.8^3 x 0.2 = 0.4096
2 failures : (4 x 3/2!) x 0.8^2 x 0.2^2 = 0.1536
3 failures : (4 x 3 x 2/3!) x 0.8 x 0.2^3 = 0.0256
4 failures : 0.2^4 = 0.0016
So if 1 unit is enough to supply the load, then if there are 0 Failures, or 1F, or 2F, or 3F, the system is still working, hence the system reliability is: 0.4096 + 0.4096 + 0.1536 + 0.0256 = 0.9984
This particular result could have been obtained from special case 2 (any one is OK, this would be a "n+3" system):

1 - 0.2^4 = 0.9984

If two units are needed to maintain the system, then only 0, 1 and 2 failures are OK (this would be a "n+2" system):
The system reliability is: 0.4096 + 0.4096 + 0.1536 = 0.9728
If three units are needed to maintain the system, then only 0 and 1 failures are OK:
The system reliability is: 0.4096 + 0.4096 = 0.8192
This particular result could have been obtained from special case 1 ("n+1"): 0.8^4 + 0.2 x 4 x 0.8^3 = 0.8192
Note that the improvement over one unit is only marginal for such a low reliability (0.8), however this is an effective solution in cases where R > 0.9. If there is no redundancy, the only acceptable case is that of 0 failures: 0.8^4 = 0.4096. This particular case is the same as the series situation (any part failure causes a system failure).
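The same binomial terms can be generated programmatically. A sketch reproducing the R = 0.8 example and the three redundancy cases just discussed:

```python
from math import comb

def failure_term(total, k, r):
    """Probability that exactly k of `total` units fail (the (k+1)th term of eq. 6)."""
    return comb(total, k) * r**(total - k) * (1 - r)**k

T, R = 4, 0.8
terms = [failure_term(T, k, R) for k in range(T + 1)]
print(terms)            # ~[0.4096, 0.4096, 0.1536, 0.0256, 0.0016]

print(sum(terms[:4]))   # 1 unit enough ("n+3"): 0.9984
print(sum(terms[:3]))   # 2 units needed ("n+2"): 0.9728
print(sum(terms[:2]))   # 3 units needed ("n+1"): 0.8192
print(terms[0])         # no redundancy (series case): 0.4096
```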
Special case 1: "n+1" redundancy, identical units.
In this case, 0F and 1F will not cause a system failure, and the reliability is given by the sum of the first two terms of the expansion:
RT = R^T + T Q R^(T-1)    ....(7)

Special case 2: Redundancy where any one unit is capable of supplying the load:
RT = 1 - [(1 - RA)(1 - RB)(1 - RC)(1 - RD) ...]    ....(8)

Parts in series:
(any part failure will cause a system failure)
RT = (RA)(RB)(RC)(RD) ...    ....(9)

Availability
Availability is sometimes mentioned in this context; it is defined as:

Availability = MTBF / (MTBF + MTTR)

where MTTR is the mean time to repair.

For good, reliable systems, Availability tends to be 0.99999……, where the mathematics gets tedious and the number difficult to interpret.
In such cases Unavailability is more meaningful, this being (1- Availability) and usually expressed in minutes/year.
Consider the previous example (m=500,000h, t=1year), and assume that MTTR is 3 hours.
Availability is 0.999 994, and Unavailability is 0.000 006 or 3.15 minutes/year.
Now consider the "N+1" system described above (N = 2).
Availability will be 0.999 999 694, and Unavailability 0.000 000 306 or 10 seconds/year.
Note however, that we now have 3 units in the system, so service calls will be 3 times as frequent, or in other words the MTBF for service calls = 500,000/3 = 166,700 hours.
It is an interesting fact that when using redundancy to improve availability, the service calls to repair system failures get much less frequent, but the service calls to repair part failures get more frequent. Since the object of the exercise is to maintain system availability, this is a small price to pay, but the costs of system failure should be weighed against the costs of service maintenance.
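The single-unit availability figures and the service-call MTBF above follow directly from the definition; a minimal sketch:

```python
MTBF, MTTR = 500_000, 3              # hours
MINUTES_PER_YEAR = 365 * 24 * 60

unavailability = MTTR / (MTBF + MTTR)        # single unit, ~0.000006
print(unavailability * MINUTES_PER_YEAR)     # ~3.15 minutes/year of downtime

# A "2+1" system contains three units, so part-level service calls
# are three times as frequent as for a single unit:
print(MTBF / 3)                              # MTBF for service calls, ~166,700 hours
```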
In some cases it is possible to either reduce costs or improve system availability further by partitioning, i.e. have different load-groups fed by different power-supply-groups. This is a subject in itself, but as an illustration the level of redundancy in a typical telephone exchange is as follows:
• Each switching card is powered by 1+1 redundant dc/dc inverters.
• Each card is duplicated in 1+1 redundancy
• Each bay and its supplies are partitioned.
• The AC/DC supplies feeding a bay are 1+1 redundant.
• The power cables and connections are 1+1 redundant.
• There is a battery backup system at the output of the ac/dcs, feeding independent busbars.
• There is a diesel generator system to back up the mains supply.
The usual design criterion is that since batteries are large, expensive, dangerous and require maintenance, only about 20 minutes of battery backup is provided, which gives enough time for several attempts to start up the diesel generator. (An automatic sequence of 10 attempts).
Since there is, on average, a short failure of the mains every week (MTBF of 170 hours (!)), this is a very necessary precaution.
7. Comparing Reliabilities
The real use of reliability predictions is not for establishing an accurate level of reliability, but for comparing different technical approaches, possibly from different manufacturers, on a relative (comparative) basis. Hence the importance of using the same database, environment etc.
When such comparisons are made, always check that all of the following are satisfied, otherwise the comparison is completely meaningless:
• The database must be stated, and must be identical. Comparing a MIL-HDBK-217F prediction with a MIL-HDBK-217E prediction or an HRD5 prediction is meaningless – there is no correlation.
• The database must be used consistently and exclusively. The result is meaningless if a different database is used for some component. The justification may be reasonable, but the result is meaningless.
• The external stresses and environment must be stated and must be identical. (Input, load, temperature, etc.) The result is meaningless if all the environmental details are not stated, or are different.
• The units must be form-fit-function (FFF) interchangeable in the application. If one is rated at 10A and the other at 5A, the comparison is fair, as long as the load is less than 5A. If the ratings are identical, but one needs an external filter and the other does not, then there is no comparison. (Although, it is possible, sometimes, to work out the failure rate of the external filter and add it to the FR of the unit, using the same database, environment and stress.)
• Comparing a predicted reliability figure with the results of a reliability demonstration test (lifetest) is also meaningless. One could argue that the results of the reliability demonstration are more meaningful, but that depends on the details of the test, the environment and the acceleration factors used. All these factors must be identical when comparing two test results, but in any case comparing test results with predictions is a meaningless comparison.
There are no miracles: if we predict 200,000 hours and another manufacturer states 3,000,000 hours for a comparable product, then they must have used either a different database, or a different stress level, or a different environment, etc.

