This article explains some of the terminology relating to measurement performance and why it’s important to understand the differences between accuracy, linearity, resolution, hysteresis and repeatability.

Maybe you were away from college on the day they did instrumentation theory – you know, accuracy, resolution, repeatability and all that stuff. You are in good company – many engineers have either forgotten or never really understood this area of engineering. The terminology and the fairly esoteric technical concepts applied to instrumentation are confusing.

Nevertheless, they are crucial to selecting the right measuring instruments for your application. Get the selection wrong and you could end up paying way over the odds for over specified transducers; get it wrong the other way and your product or control system may lack critical performance.

This article focuses on position transducers and explains some of the terminology, the key considerations in specifying appropriate instrumentation for your application, and some common pitfalls.

Firstly some definitions:-

  • An instrument’s Accuracy is a measure of how closely its output matches the true value of the quantity being measured
  • An instrument’s Resolution is a measure of the smallest increment or decrement in position that it can measure
  • A position measuring instrument’s Precision is its degree of reproducibility
  • A position measuring instrument’s Linearity is a measure of the deviation between the transducer’s output and the actual displacement being measured

Most engineers get their knickers in a twist about the difference between precision and accuracy. The difference can be explained using the analogy of an arrow fired at a target: accuracy describes the closeness of an arrow to the bullseye.

Fig. 1 An accurate shot

Fig. 2 Precise shots – not necessarily accurate

If many arrows were shot, precision equates to the size of the arrow cluster. If all arrows are grouped together, the cluster is considered precise.
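The target analogy can be made concrete with a short calculation: accuracy is the distance from the cluster’s centre to the bullseye, while precision is the spread of the shots about their own centre. The shot coordinates below are made up for illustration.

```python
import math

# Hypothetical arrow positions (x, y) relative to a bullseye at (0, 0)
shots = [(1.1, 0.9), (1.0, 1.2), (0.9, 1.0), (1.2, 1.1)]

# Centre of the cluster
cx = sum(x for x, _ in shots) / len(shots)
cy = sum(y for _, y in shots) / len(shots)

# Accuracy error: how far the cluster centre lies from the bullseye
accuracy_error = math.hypot(cx, cy)

# Precision: radius of the tightest circle about the centre containing all shots
precision = max(math.hypot(x - cx, y - cy) for x, y in shots)

print(f"accuracy error: {accuracy_error:.2f}")         # large offset: not accurate
print(f"cluster radius (precision): {precision:.2f}")  # tight cluster: precise
```

These shots correspond to Fig. 2: tightly grouped (precise) but well away from the bullseye (not accurate).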

A perfectly linear measuring device is also a perfectly accurate one – which is why, in transducer data sheets, linearity is often quoted as the measure of accuracy.

So, that’s pretty straightforward then – just specify very accurate and very precise measuring instruments every time and you’ll be OK? Unfortunately, there are some big snags with such an approach. Firstly, high accuracy, high precision instrumentation is always expensive. Secondly, it may require careful installation, which may not be possible because of vibration, thermal expansion/contraction etc. Thirdly, some types of high accuracy, high precision instrumentation are delicate and will malfunction or fail with changes in environmental conditions – most notably temperature, dirt, humidity and condensation.

The optimal strategy is to specify what is required – nothing more, nothing less. For a displacement transducer in an industrial flow meter, for example, linearity will not be a key requirement because the fluid’s flow characteristics are likely to be non-linear anyway. More likely, repeatability and stability over varying environmental conditions are the key requirements.

In a CNC machine tool, by contrast, accuracy and precision are likely to be key requirements. Accordingly, a displacement measuring instrument with high accuracy (linearity), high resolution and high repeatability – even in dirty, wet environments over long periods without maintenance – is what is needed.

A good tip is always to read the small print of any measuring instrument’s specification – especially how the claimed accuracy and precision vary with environmental effects, age or installation tolerances.

Another good tip is to find out exactly how an instrument’s linearity varies. If the non-linearity is monotonic or slowly varying, it can easily be calibrated out using a few reference points. For a gap measuring device, for example, this could be done with slip gauges. In the example shown below, a fairly non-linear transducer is calibrated into a highly linear (accurate) device with a relatively small number of reference points.

Fig. 3 Pre-Cal & Post Cal Output of ‘Monotonic’ or Slowly Varying Transducer
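The few-point calibration described above can be as simple as a small table of reference points with linear interpolation between them. A minimal sketch, with made-up reference values standing in for slip-gauge measurements:

```python
import bisect

# Reference points: (raw transducer reading, true displacement), e.g. from slip gauges.
# Values are illustrative only.
cal_table = [(0.00, 0.00), (0.26, 0.25), (0.53, 0.50), (0.77, 0.75), (1.00, 1.00)]

def calibrate(raw):
    """Convert a raw reading to displacement by interpolating the calibration table."""
    raws = [r for r, _ in cal_table]
    i = bisect.bisect_right(raws, raw) - 1
    i = max(0, min(i, len(cal_table) - 2))   # clamp to the table's range
    (r0, t0), (r1, t1) = cal_table[i], cal_table[i + 1]
    return t0 + (t1 - t0) * (raw - r0) / (r1 - r0)

print(calibrate(0.395))   # interpolated between the 0.26 and 0.53 reference points
```

The more reference points in the table, the better the residual non-linearity – which is exactly why a slowly varying characteristic needs only a handful of points while a rapidly varying one needs thousands.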

However, in this second example, a rapidly varying device is calibrated with 10 points and its linearity hardly changes. It might take more than 1,000 points for such a rapidly varying measurement characteristic to be linearised. Such a process is unlikely to be practical with slip gauges, but it might be practical to build a look-up table of readings compared against a higher performance reference device such as a laser interferometer.

Fig. 4 Pre-Cal & Post Cal Output of Rapidly Varying Transducer

A common pitfall – Optical Encoders

Optical encoders work by shining a light source on to or through an optical element – usually a glass disk. The light is either blocked or passes through the disk’s gratings and a signal, analogous to position, is generated.

The glass disks are amazing – with tiny features which allow manufacturers to claim high precision. What is often not made explicit is what happens if these tiny features are obscured by dust, dirt, grease etc. In reality, even very small amounts of foreign matter can cause mis-reads. What is more, there is seldom any warning of failure – the device simply stops working altogether. This is known as ‘catastrophic failure’. Less well known is the issue of accuracy in optical encoders, and optical encoder kits in particular.

Consider an optical device using a 1” nominal disk with a resolution of 18 bits (256k points). Typically the claimed accuracy for such a device might be +/-10 arc-seconds. However, what should be in big bold print (but surprisingly never is) is that the stated accuracy assumes that the disk rotates perfectly relative to the read head and that temperature is constant.
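As a sanity check on such claims, the per-count resolution of an 18-bit encoder is easy to work out: a full revolution is 1,296,000 arc-seconds, split into 2^18 counts.

```python
# Angular resolution of an 18-bit rotary encoder
counts = 2 ** 18                 # 262,144 counts per revolution ("256k")
arcsec_per_rev = 360 * 3600      # 1,296,000 arc-seconds per revolution
resolution = arcsec_per_rev / counts
print(f"{resolution:.2f} arc-seconds per count")  # ≈ 4.94
```

So a claimed accuracy of ±10 arc-seconds amounts to only about two counts of the encoder’s own resolution – a very tight claim, which is why the mounting assumptions behind it matter so much.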

If we consider a more realistic example, the disk is mounted slightly eccentrically by 0.001” (0.025mm).

Eccentricity comes from several sources and the following is a list of just some:-

  • concentricity of the glass disk on its hub
  • concentricity of the hub’s through bore relative to the optical disk
  • perpendicularity of the hub relative to the plane of the optical disk
  • parallelism of the optical disk face with the plane of the read head
  • concentricity of the shaft on which the hub is mounted
  • clearances in the bearings and bearing mounts which support the main shaft
  • imperfect alignment of the bearings
  • roundness of the shaft and roundness of the hub’s through bore
  • locating method (typically a grub-screw will pull the hub to one side)
  • displacements due to stresses or strain from forces on the shaft’s bearings
  • thermal effects
  • etc. etc.

Fig. 5 – Imperfectly mounted rotor

A perfectly mounted optical disk requires such fine engineering that cost becomes prohibitive. In reality, there is a measurement error because the optical disk is not where the read head thinks it is. If we consider a mounting error of, say, 0.001", then the measurement error is equivalent to the angle subtended by 0.001" at the optical track radius. To make the maths easy, let’s assume that the tracks are at a radius of 0.5".

This equates to an error of 2 milliradians, or about 412 arc-seconds. In other words, the device with a specified accuracy of ±10 arc-seconds is more than 40 times less accurate than its data sheet suggests.

If you get an optical disk positioned accurately to within 0.001" you are doing really well. Realistically, you’re more likely to be in the range of 2–10 thousandths of an inch, so the actual accuracy will be 80–400 times worse than you might originally have calculated.
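The arithmetic is simple enough to script: for small angles, the angular error is roughly the eccentricity divided by the track radius (in radians), converted to arc-seconds. A sketch using the figures from the example above:

```python
import math

ARCSEC_PER_RAD = 180 * 3600 / math.pi   # ≈ 206,265 arc-seconds per radian

def eccentricity_error_arcsec(ecc_in, track_radius_in=0.5):
    """Small-angle error caused by mounting eccentricity at the optical track radius."""
    return (ecc_in / track_radius_in) * ARCSEC_PER_RAD

for ecc in (0.001, 0.002, 0.010):       # inches of eccentricity
    print(f'{ecc}" -> {eccentricity_error_arcsec(ecc):.0f} arc-seconds')
```

Against a ±10 arc-second data sheet claim, these figures reproduce the roughly 40× (at 0.001") to 400× (at 0.010") degradation described above.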

The measurement principle of a resolver or a new generation inductive device is completely different. Measurement is based on the mutual inductance between the rotor (the disk) and the stator (the reader). Rather than calculating position from readings taken at a point, measurements are generated over the full face of both the stator and rotor. Consequently, discrepancies caused by non-concentricity in one part of the device are negated by opposing effects at the opposite part of the device. The headline figures of resolution and accuracy are often not as impressive as those for optical encoders. However, what’s key is that this measurement performance is maintained across a range of non-ideal conditions.
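This cancellation can be illustrated with a toy model. Suppose eccentricity e at track radius r shifts the angle seen at a single read point by roughly (e/r)·sin(θ − φ), where φ is the read point’s angular position (a first-order approximation, with made-up numbers below). Averaging two diametrically opposite read points cancels the error:

```python
import math

def apparent_angle(true_angle, ecc, radius, head_pos):
    """First-order apparent angle at a read point located at angle head_pos."""
    return true_angle + (ecc / radius) * math.sin(true_angle - head_pos)

true_angle = 1.0            # radians
ecc, radius = 0.025, 12.7   # mm: roughly 0.001" eccentricity, 0.5" track radius

one_point = apparent_angle(true_angle, ecc, radius, 0.0)
opposite = apparent_angle(true_angle, ecc, radius, math.pi)
averaged = (one_point + opposite) / 2

print(one_point - true_angle)   # a single read point sees a milliradian-scale error
print(averaged - true_angle)    # opposite points average the error away
```

Measuring over the full face, as inductive devices do, takes this averaging to its limit: the error from any local misalignment is offset by an opposing error on the other side of the device.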


Fig. 6 Example of a new generation inductive displacement transducer – highly accurate irrespective of environmental conditions

The quoted measurement performance of some of the new generation inductive devices is not based on perfect alignment of rotor and stator; rather, realistically achievable tolerances (typically ±0.2mm) are accounted for in any quoted resolution, repeatability and accuracy figures. Further, the stated performance of inductive devices is not subject to variation due to foreign matter, humidity, lifetime, bearing wear or vibration.
