2013 ~ Learning Instrumentation And Control Engineering

Process Control Basics: Feedforward and Closed Loop Control


Feedforward Control
Feedforward control is a control strategy that anticipates load disturbances and acts on them before they can impact the process variable (PV). It is a form of open loop control, as the process variable is not used in the control action. In feedforward control, the major process variables (A1, A2 & A3) are fed into a model to calculate the manipulated variable (MV) required to control at setpoint (SP). For feedforward control to work effectively, the user must have a mathematical understanding of how the manipulated variable(s) will impact the process variable(s).
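Below is a minimal, illustrative sketch (not a production controller) of the idea just described: a simple steady-state model combines the measured load variables with hypothetical gains to compute the MV needed to hold the setpoint. The gains k0 to k3 are assumed placeholder coefficients, not values from any real process model.

```python
# Minimal feedforward sketch (illustrative only): a steady-state model
# computes the manipulated variable (MV) from measured load variables
# A1, A2, A3 and the setpoint (SP). k0..k3 are hypothetical model gains.

def feedforward_mv(sp, a1, a2, a3, k0=0.0, k1=1.0, k2=0.5, k3=0.25):
    """Compute the MV needed to hold SP given measured load disturbances."""
    disturbance_effect = k1 * a1 + k2 * a2 + k3 * a3
    return k0 + sp - disturbance_effect

# Example: a setpoint of 50.0 with three measured load variables
mv = feedforward_mv(sp=50.0, a1=10.0, a2=4.0, a3=2.0)
print(f"Manipulated variable output: {mv:.2f}")
```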




Process Control Basics


What is a Process?
The word ‘Process’, as used in process control and the process industry, refers to the methods applied in changing or refining raw materials into end products. The raw materials, which may be liquids, gases, or mixtures of solid and liquid (slurries), are transferred, measured, mixed, heated or cooled, filtered, stored, or handled in some other way during processing to produce the end product.
Process industries include the chemical industry, the oil and gas industry, the food and beverage industry, the pharmaceutical industry, the water treatment industry, and the power industry.

What is Process Control?
Process Control refers to the methods used to control process variables during the manufacture




How to Size an Orifice Plate Flow Meter with Software



Orifice plate sizing is normally done these days with software provided by the manufacturers of the device. This tutorial will use the Daniel Orifice Flow Calculator version 3.1.
Before going through this Orifice sizing tutorial please read:
The Orifice Flow Meter Equation
A Guide to Sizing Orifice Plate Flow Meters

In these articles, you will familiarise yourself with the Orifice plate flow equation and the various correction factors applied when determining flow from Orifice plate flow meter installations. You will also learn about the parameters that affect Orifice plate sizing the most.

To size an orifice plate with software, you require the following process parameters:




A Guide to Sizing Orifice Plate Flow Meters


The Orifice plate is a very robust flow measurement device. It is very easy to use and can easily be adapted to many flow measurement applications. Its cost of operation is minimal and familiarity with the device is near universal. All these pluses make the Orifice plate the first choice measurement device in almost every flow application. It does, however, have some limitations which make the sizing process a little tricky. Two key limitations include:

(a) Limited Turn Down – See Flow Meters Accuracy and Terminology
(b) Non-linear loss of accuracy at low flow rates as shown by the graph below:




The Orifice Flow Meter Equation


The orifice flow meter is one of the most popular devices for measuring flow. It has proven its mettle in both liquid and gaseous applications. In the natural gas industry, the orifice plate continues to play a dominant role in flow measurement applications.

In the orifice flow measurement application, changes in static pressure, temperature and density are critical. In liquid systems, static pressure changes have a negligible effect on liquid density, but in gaseous systems a change in static pressure significantly impacts density due to the compressible nature of gases. A change in temperature affects both liquid and gaseous densities and as such are




Basics of A Five Point Calibration


Owing to the physical limitations of measuring devices and the system under study, every practical measurement will always have some errors. Several types of errors occur in a measurement system. These include:

Static Errors:
They are caused by limitations of the measuring device or the physical laws governing its behaviour.

Dynamic Errors:
They are caused by the instrument not responding fast enough to follow changes in the measured variable. A practical example is a room thermometer that does not show the correct temperature until several minutes after the temperature has reached a steady value.

Random Errors:
These may be due to causes which cannot be readily established; they could also be caused by random variations in the system under study.

Basic Steps in Instrument Calibration
Calibration is the process of ascertaining the output of an instrument, after it has been in use for a definite period, by measuring and comparing it against a standard reference, and of carrying out the adjustments required to confirm that its present accuracy conforms to that specified by its manufacturer.
There are three basic steps involved in the calibration of an instrument. These include:




How to Convert Resistance to Temperature


Resistance is the electrical property of a material that opposes the flow of electricity through it. The degree of resistance to electricity is determined by another property of the material called resistivity.
The resistivity of a material is defined as the resistance to current flow between the opposite faces of a unit cube of the material (expressed in ohm·metre). Hence the resistance R of a component is expressed by:

R = ρL/A

Where:
R = Resistance of the component in Ohms
L = Length of the component
A = Cross sectional Area of the component
ρ = Resistivity of the material
To use the above formula, L and A must be in compatible units.
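As a quick worked example of R = ρL/A, the sketch below computes the resistance of a short copper wire. The dimensions are illustrative, and the resistivity used is the commonly quoted value for copper at about 20 °C.

```python
# Worked example of R = rho*L/A (illustrative values):
# resistance of a copper wire, using the commonly quoted resistivity
# of copper, about 1.68e-8 ohm*metre at 20 degrees C.
import math

rho = 1.68e-8          # resistivity of copper, ohm*m (approximate)
length = 10.0          # wire length, m
diameter = 1.0e-3      # wire diameter, m

area = math.pi * (diameter / 2) ** 2   # cross-sectional area, m^2
resistance = rho * length / area       # ohms

print(f"R = {resistance:.4f} ohm")     # roughly 0.21 ohm
```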




How to Calibrate a Pressure Gauge With a Dead Weight Tester


Basic Operating Principle

Deadweight Testers (DWT) are the primary standard for pressure measurement. There are three main components of this device: a fluid (oil) that transmits the pressure, a weight and piston used to apply the pressure, and a connection port for the gauge to be calibrated.

The dead weight tester also contains an oil reservoir and an adjusting piston or screw pump. The reservoir accumulates oil displaced by the vertical piston during calibration tests when a large range of accurately calibrated weights are used for a given gauge. The adjusting piston is used to make sure that the vertical piston is freely floating on the oil. Please see How a Dead Weight Tester Works for a detailed description of the working principle of the device.

Calibration Basics

To carry out tests or calibrate a pressure gauge with the dead weight tester (DWT), accurately calibrated weights (force) are loaded on the piston (area), which rises freely within its cylinder. These weights balance the upward force created by the pressure within the system:
PRESSURE = FORCE/AREA = W/A
So for each weight added, the pressure transmitted within the oil in the dead weight tester is calculated with the above formula because the area of the piston of the tester is accurately known.

Note:
If the weights are in pounds (lbs) and the area is in square inches, then the calculated pressure is in pounds per square inch (PSI).

If the weights are in kilograms (kg) and the area of the piston is in square metres, then the calculated pressure [P = (W*G)/A, G = gravity in m/s²] is in N/m² or pascal.
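The note above can be expressed as a small calculation sketch; the weights and piston areas used here are illustrative values only.

```python
# A quick sketch of the pressure calculation above, in both unit systems.
# Values are illustrative, not from any real tester.

def dwt_pressure_psi(weight_lb, area_in2):
    """Pressure in PSI when weight is in pounds-force and area in square inches."""
    return weight_lb / area_in2

def dwt_pressure_pa(mass_kg, area_m2, g=9.80665):
    """Pressure in pascal (N/m^2) when mass is in kg and area in square metres."""
    return (mass_kg * g) / area_m2

print(dwt_pressure_psi(50.0, 0.5))       # 100.0 PSI
print(dwt_pressure_pa(10.0, 0.0001))     # ~980665 Pa
```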

During calibration, the system is primed with liquid from the reservoir, and the system pressure is increased by means of the adjusting piston. As liquids are considered incompressible, the displaced liquid causes the piston to rise within the cylinder to balance the downward force of the weights.

Calibrating a Pressure Gauge with the Dead Weight Tester:

To calibrate a pressure gauge with a dead weight tester, set up the device on a level, stable workbench or similar surface as shown in the diagram below:
Proceed with the calibration according to the following steps:
Step 1:
Connect the pressure gauge to the test port on the dead weight tester as shown in the diagram above. Ensure that the test gauge reads zero; if not, correct the zero error and confirm the gauge reads zero before proceeding with the calibration exercise.

Step 2:
Select a weight and place it on the vertical piston

Step 3:
Turn the handle of the adjusting piston or screw pump to ensure that the weight and piston are supported freely by oil.

Step 4:
Spin the vertical piston and ensure that it is floating freely.

Step 5:
Allow a few moments for the system to stabilize before taking any readings. After the system has stabilized, record the gauge reading and the weight.

Step 6:
Repeat steps 2 through 5 for increasing weights until the full range or maximum pressure is applied to the gauge and then decreasing weights until the gauge reads zero pressure. Calculate the error at each gauge reading and ensure that it is within the acceptable accuracy limits.

If you are doing a five point calibration, then increasing weights should be added corresponding to 0%, 25%, 50%, 75%, and 100% of the full range pressure of the pressure gauge. And for decreasing pressure you proceed in the order 100%, 75%, 50%, 25%, 0%.

For pressure gauges with less accuracy specifications, calibration at the points: 0%, 50% and 100% will suffice.

After calibration, your data can be recorded in a table in this manner:

Upscale Readings:

% Input | Weights, W | DWT Pressure (W/A)* | Test Gauge Pressure | Error
0       |            |                     |                     |
25      |            |                     |                     |
50      |            |                     |                     |
75      |            |                     |                     |
100     |            |                     |                     |

*DWT Pressure = W/A; if W is in lbs and A is in square inches, then DWT Pressure is in PSI (pounds per square inch). However, if W is in kg and A is in square metres, then:
DWT Pressure = (W*G)/A, where G = gravity in metres per second squared (m/s²) and DWT Pressure is in N/m² or pascal.

Downscale Readings:

% Input | Weights, W | DWT Pressure (W/A)* | Test Gauge Pressure | Error
100     |            |                     |                     |
75      |            |                     |                     |
50      |            |                     |                     |
25      |            |                     |                     |
0       |            |                     |                     |

At each pressure reading, the absolute error is calculated thus:
Absolute Error = DWT Pressure – Test Gauge Pressure
The absolute error at each point should be within the acceptable accuracy limits of the gauge.

If the gauge error is in % span proceed as follows to calculate the error:
Span = Maximum pressure – minimum pressure
%Error = [(DWT Pressure – Test Gauge Pressure)/Span]*100  for each pressure gauge reading.
The error in % span should be within the acceptable accuracy limits otherwise the calibration will have to be repeated to correct the errors.

If the pressure gauge error is in % FSD(Full Scale Deflection), proceed as follows to calculate the error:
% Error = [(DWT Pressure - Test Gauge Pressure)/FSD]*100
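The three error formulas above can be summarised in a short sketch like the one below. The 0-100 PSI range and the readings in the example list are hypothetical, just to show how the calculation would be applied at each calibration point.

```python
# Sketch of the error calculations described above for a five-point check.
# The readings list is hypothetical example data, not real calibration results.

def absolute_error(dwt_pressure, gauge_pressure):
    return dwt_pressure - gauge_pressure

def percent_span_error(dwt_pressure, gauge_pressure, span):
    return (dwt_pressure - gauge_pressure) / span * 100.0

def percent_fsd_error(dwt_pressure, gauge_pressure, fsd):
    return (dwt_pressure - gauge_pressure) / fsd * 100.0

# Example: 0-100 PSI gauge, upscale readings at 0/25/50/75/100% of range
span = 100.0   # maximum pressure - minimum pressure
fsd = 100.0    # full scale deflection
readings = [(0.0, 0.1), (25.0, 24.8), (50.0, 50.3), (75.0, 74.9), (100.0, 99.7)]

for dwt_p, gauge_p in readings:
    print(f"DWT {dwt_p:6.1f} PSI | abs {absolute_error(dwt_p, gauge_p):+5.2f} "
          f"| % span {percent_span_error(dwt_p, gauge_p, span):+5.2f} "
          f"| % FSD {percent_fsd_error(dwt_p, gauge_p, fsd):+5.2f}")
```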

Correction Factors:

The deadweight tester has been calibrated to the Gravity, Temperature, and Air Density stated on the calibration certificate right from the laboratory.
Equations and factors are given on the certificate to adjust for any variations in these environmental conditions.
Always refer to the documentation for the Dead Weight Tester to ensure that for maximum accuracy, the necessary calibration correction factors are applied to any reading from the device.

Gravity Correction

Gravity varies with geographic location, and so will the deadweight tester reading. Due to the significant variation in gravity throughout the world (about 0.5%), ensure that the tester in your possession has been manufactured to the specification of your local gravity; otherwise you may have to apply the correction for the calibrated gravity.

To correct for gravity use:
True Pressure = [(Gravity (CS))/(Gravity(LS))]*P(Indicated) 
Where:
P(Indicated)  = Pressure indicated by gauge being calibrated
Gravity(CS)  = Gravity at Calibration Site
Gravity (LS) = Gravity at Laboratory Site

Temperature Correction

Temperature and Air Density variations are less significant than gravity. Variations should be corrected for when maximum accuracy is required.
To correct for Temperature variation use:

True Pressure = P(Indicated) [1+ {T(DWTCT) – T(OT)}*{ΔP/100}] 
Where:
P(Indicated)    = Pressure indicated by gauge being calibrated
T(DWTCT)     = Dead Weight Tester calibrated temperature in the laboratory
T(OT)              = Operating temperature at calibration site
ΔP                    = Percentage pressure change per unit temperature change
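A minimal sketch of the two corrections is given below. All inputs are placeholders; in practice the calibrated gravity, temperature and ΔP figures come from the tester's calibration certificate.

```python
# Sketch of the two correction formulas above. All numbers are placeholders;
# real values come from the dead weight tester's calibration certificate.

def gravity_corrected(p_indicated, gravity_cal_site, gravity_lab_site):
    """True pressure corrected for local gravity."""
    return (gravity_cal_site / gravity_lab_site) * p_indicated

def temperature_corrected(p_indicated, t_dwt_cal, t_operating, dp_percent_per_deg):
    """True pressure corrected for operating temperature."""
    return p_indicated * (1 + (t_dwt_cal - t_operating) * dp_percent_per_deg / 100.0)

print(gravity_corrected(100.0, 9.780, 9.812))           # local gravity lower than lab gravity
print(temperature_corrected(100.0, 20.0, 28.0, 0.002))  # small temperature effect
```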




Ultrasonic Level Sensors - Operating Principle



Ultrasonic level sensors measure level by measuring the distance from the transmitter (usually located at the top of a vessel) to the surface of a process material located further below inside the vessel or tank. The time taken for a sound wave to travel to the process material surface and back is used to calculate this distance, which the transmitter electronics interpret as process level.

The transmitter electronics module contains all the power, computation, and signal processing circuits and an ultrasonic transducer. The transducer consists of one or more piezoelectric crystals for the transmission and reception of the sound waves. When electrical energy is applied to the piezoelectric crystals, they move to produce a sound signal. When the sound signal is reflected back, the movement of the reflected sound wave generates an electrical signal; this is detected as the return pulse. The transit time, which is measured as the time between the transmitted and return signals, is then used to infer the level of a vessel. The basic design of an Ultrasonic level instrument is shown below:
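As a numerical illustration of the time-of-flight relationship described above, here is a minimal sketch. It assumes a fixed speed of sound of about 343 m/s (air at roughly 20 °C) and a hypothetical sensor mounting height; real instruments compensate the sound velocity for temperature.

```python
# Minimal sketch of the time-of-flight calculation described above.
# SPEED_OF_SOUND is an assumed constant (~343 m/s in air at about 20 C).

SPEED_OF_SOUND = 343.0   # m/s, assumed

def level_from_transit_time(transit_time_s, sensor_height_m):
    """Return process level, given round-trip echo time and sensor height above vessel bottom."""
    distance_to_surface = SPEED_OF_SOUND * transit_time_s / 2.0  # one-way distance
    return sensor_height_m - distance_to_surface

# Example: echo returns after 11.66 ms with the transducer 4.0 m above the vessel bottom
print(f"Level = {level_from_transit_time(0.01166, 4.0):.2f} m")   # about 2.0 m
```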




How to Calibrate a Current to Pressure Transducer


Basic Operation
The I/P transducer receives a 4-20 mA DC input signal from a control device and transmits a proportional, field-configurable pneumatic output pressure to a final control element, usually a control valve. The I/P converter is typically used in electronic control loops where the final control element is a pneumatically operated control valve assembly. In most applications the I/P transducer is mounted on a control valve, or very close to it in a mounting bracket.
See How a Current to Pressure Transducer Works
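As an illustration of the proportional relationship just described, the sketch below linearly maps a 4-20 mA input onto the common 3-15 psi pneumatic output range. It is only a scaling example, not the transfer function of any particular I/P transducer.

```python
# Illustrative linear scaling for a 4-20 mA to 3-15 psi I/P transducer.
# The ranges are the common standard signal values; the function itself
# is just an interpolation sketch.

def ip_output_psi(current_ma, in_lo=4.0, in_hi=20.0, out_lo=3.0, out_hi=15.0):
    """Convert a 4-20 mA control signal to the proportional 3-15 psi output."""
    fraction = (current_ma - in_lo) / (in_hi - in_lo)
    return out_lo + fraction * (out_hi - out_lo)

print(ip_output_psi(4.0))    # 3.0 psi at minimum signal
print(ip_output_psi(12.0))   # 9.0 psi at mid-range
print(ip_output_psi(20.0))   # 15.0 psi at maximum signal
```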




Transmitters Used in Process Instrumentation


For a process to be adequately controlled and manipulated, the variable of interest in the process (e.g. temperature, pressure or flow), often called the Process Variable (PV), needs to be measured by a sensor which converts the measurement into a suitable signal format (4-20 mA or digital) and then transmits it to a controller, which makes the control decision and finally acts on a final control element in the control loop. The device that performs this signal transmission is referred to as a transmitter. The schematic below illustrates the interactions between all the elements in the control loop:

Elements of a Process Control Loop

 
What is a Transmitter?
A Transmitter is a device that converts the signal produced by a sensor into a standardized instrumentation signal such as 3-15 PSI air pressure, 4-20 mA DC electric current, a Fieldbus digital signal etc., which may then be conveyed to an indicating device, a controlling device, or both. The indicating or controlling device is often located in a centralized control room. A transmitter often combines the sensor and the transmitter electronics in a single unit. The sensor measures the process variable and generates a proportional signal; the transmitter then amplifies and conditions the sensor signal for onward transmission to the receiving or controlling device.
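A minimal sketch of the scaling a transmitter performs is shown below: a measured process variable is mapped linearly onto the standard 4-20 mA range. The 0-150 °C input span is an assumed example, not a value from the article.

```python
# Small sketch of what a transmitter does: scale a measured process
# variable into the standard 4-20 mA signal. The 0-150 degC span is
# only an assumed example range.

def pv_to_current_ma(pv, pv_lo=0.0, pv_hi=150.0):
    """Scale a process variable linearly into a 4-20 mA signal."""
    fraction = (pv - pv_lo) / (pv_hi - pv_lo)
    return 4.0 + fraction * 16.0

print(pv_to_current_ma(0.0))     # 4.0 mA at the bottom of the range
print(pv_to_current_ma(75.0))    # 12.0 mA at mid-range
print(pv_to_current_ma(150.0))   # 20.0 mA at full range
```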




Volume Flow Rate in Liquid and Gas Measurement



On this blog, we have discussed various flow meter technologies where a volumetric flow rate is required. It is also important to discuss the units of flow measurement used with some of these flow meter technologies. This post intends to improve your understanding of volumetric flow rates in liquid and, especially, gas flow measurement.

As we may have seen, the majority of flow meter technologies operate on the principle of interpreting fluid flow based on the velocity of the fluid. Some of the flow meter technologies using this principle include:
(a) Ultrasonic flow meters
(b) Turbine Flow meters
(c) Orifice Flow meters etc.

In these velocity-based flow meters, fluid velocity can easily be translated into volumetric flow by using the continuity equation below:

                               Q = AV
Where:
Q = Volumetric Flow rate
A = Cross-sectional area of flow meter throat
V = Average fluid velocity at throat section
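A short sketch applying Q = AV for a circular throat is shown below; the throat diameter and average velocity are illustrative numbers.

```python
# Sketch of the continuity equation Q = A*V for a circular meter throat.
# The diameter and velocity below are illustrative values only.
import math

def volumetric_flow(throat_diameter_m, velocity_m_s):
    """Volumetric flow rate (m^3/s) from average velocity and throat diameter."""
    area = math.pi * (throat_diameter_m / 2.0) ** 2
    return area * velocity_m_s

q = volumetric_flow(0.1, 2.5)          # 100 mm throat, 2.5 m/s average velocity
print(f"Q = {q:.4f} m^3/s = {q * 3600:.1f} m^3/h")
```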





How to Calibrate Smart Transmitters



In our last discussion, Introduction to Smart Transmitters, we saw that a smart transmitter is remarkably different from a conventional analog transmitter. Consequently, the calibration methods for the two devices are also very different. Remember that calibration refers to the adjustment of an instrument so its output accurately corresponds to its input throughout a specified range. A true calibration therefore requires a reference standard, usually in the form of one or more pieces of calibration equipment, to provide an input and measure the resulting output. If you got here looking for information on analog pressure transmitter calibration, you may consult: How to Calibrate Your DP Transmitter


The procedure for calibrating a smart digital transmitter is known as Digital trimming. A digital trim is a calibration exercise that allows the user to correct the transmitter’s digital signal to match plant standard or compensate for installation effects. Digital trim in a smart transmitter can be done in two ways:




Basics of Smart Transmitters



Smart transmitters are an advancement over conventional analog transmitters. They contain microprocessors as an integral unit within the device. These devices have built-in diagnostic ability, greater accuracy (due to digital compensation of sensor nonlinearities), and the ability to communicate digitally with host devices for reporting of various process parameters.

The most common class of smart transmitters incorporates the HART protocol. HART, an acronym for Highway Addressable Remote Transducer, is an industry standard that defines the communications protocol between smart field devices and a control system that employs traditional 4-20 mA signal.

Parts of a Smart Transmitter:
To fully understand the main components of a smart transmitter, a simplified block diagram of the device is shown below:
Fig A Block Diagram of a Smart Transmitter

The above block diagram is further simplified to give the one below:
Fig B Simplified Block Diagram of a Smart Transmitter


As shown above in fig A, the smart transmitter consists of the following basic parts:




Basics of Flow Measurement with the Orifice Flow Meter II


Accuracy and Rangeability of Orifice Metering Systems
The performance of the orifice meter system, just like that of other differential pressure flow meters, depends on the precision of the orifice plate and the accuracy of the differential pressure sensor. Orifice plate accuracy is rated as a percentage of actual flow rate, whereas differential pressure transmitters have their accuracy rated as a percentage of calibrated span. Due to the fact that flow rate is proportional to




Basics of Flow Measurement with the Orifice Flow Meter I


The differential pressure measurement method is a universally utilized measuring principle for flow measurement. The orifice flow meter is a type of differential pressure flow meter that can be used for measuring gases and liquids.

As shown in Flow Instrumentation: principles and Formulas, we know that the relationship between flow and differential pressure in a flow restriction device like the orifice meter is given by:

$Q = K\sqrt{\frac{\Delta P}{\rho}}$

Where
K = a constant
ΔP = differential pressure across device
ρ = density of the fluid.

In the above formula, fluid density is a key factor in flow measurement computation in both liquids and gases. If fluid density is subject to change over time, we will need some means to continually calculate ρ so that our inferred flow measurement will remain accurate. Variable fluid density is typically experienced in gas flow measurement, since all gases are compressible by definition. A simple change in static gas pressure within the pipe is all that is needed to make ρ change, which in turn affects the relationship between flow rate and differential pressure drop. Therefore in gas flow measurement, change in fluid density with static pressure is compensated for.
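One rough way to illustrate this compensation is sketched below: density is estimated from the measured static pressure and temperature using the ideal gas law, and the result is fed into the orifice equation. The constant K, gas molar mass and operating conditions are placeholder values; real flow computers use far more rigorous equations of state and correction factors.

```python
# Rough sketch of density compensation for gas flow: estimate rho from
# static pressure and temperature with the ideal gas law, then apply
# Q = K*sqrt(dP/rho). K and the gas properties are placeholder values.
import math

R_UNIVERSAL = 8.314        # J/(mol*K)

def gas_density(p_static_pa, temp_k, molar_mass_kg_per_mol):
    """Ideal-gas estimate of density, rho = P*M/(R*T)."""
    return p_static_pa * molar_mass_kg_per_mol / (R_UNIVERSAL * temp_k)

def orifice_flow(k, dp_pa, rho):
    """Inferred volumetric flow from the orifice equation Q = K*sqrt(dP/rho)."""
    return k * math.sqrt(dp_pa / rho)

rho = gas_density(500_000.0, 300.0, 0.01604)    # e.g. methane at 5 bar, 300 K
print(f"rho = {rho:.2f} kg/m^3, Q = {orifice_flow(0.05, 2500.0, rho):.3f}")
```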





Operating Principle of Non-Contacting Radar Level Sensors/Gauges (Unguided Wave)


Radar level instruments measure the distance from the transmitter/sensor (located at some high point) to the surface of a process material located further below, in much the same way as ultrasonic level sensors: by measuring the time-of-flight of a travelling wave and using it to determine the level of the process material. They are regarded as continuous level measurement devices because they continue to measure level even as the level of the liquid in the vessel changes.
The fundamental difference between a radar level instrument and an ultrasonic level instrument is the type of wave used. Radar level instruments use radio waves instead of the sound waves used in ultrasonic instruments. Radio waves are electromagnetic in nature (comprising alternating electric and magnetic fields), with very high frequencies in the microwave range (GHz).

There are two basic types of radar level instruments: guided-wave radar and non-contact radar. Guided-wave radar instruments use waveguide “probes” to guide the radio waves into the process liquid, while non-contact radar instruments send radio waves out through open space to reflect off the process material. Note that guided-wave radar instruments are typically used in applications where the dielectric constant of the process liquid is quite low. All radar level instruments use an antenna to broadcast radio signals towards the process liquid whose level is to be determined. The diagram below illustrates these two approaches:




Operating Principle of Displacer Level Sensors


Displacer level sensors use Archimedes’ Principle to detect liquid level by continuously measuring the weight of a displacer rod immersed in the process liquid. The displacer is cylindrical, with a constant cross-sectional area, and is made long or short as required; standard lengths range from 14 inches to 120 inches. As liquid level increases, the displacer rod experiences a greater buoyant force, making it appear lighter to the sensing instrument, which interprets the loss of weight as an increase in level and transmits a proportional output signal. As liquid level decreases, the buoyant force on the displacer rod decreases, with a corresponding increase in weight which the level sensor interprets as decreasing level, again giving a corresponding output signal.
Shown below is a typical displacer level sensor installation:

Although the basic theory of operation has been outlined above, a practical displacer level sensor is engineered to achieve the desired measurement objective with sophisticated electronic circuitry. In these types of displacer level sensors, the displacer is attached to a spring which restricts its movement for each increment of buoyancy (i.e. level change). A transmitter incorporating a Linear Variable Differential Transformer (LVDT) is used to track the rise and fall of the displacer rod as liquid level changes. Sophisticated electronics then processes the voltage signal from the LVDT into a 4-20 mA output signal.

Archimedes’ Principle Applied to the Displacer
According to Archimedes’ Principle, the buoyant force on an immersed object is always equal to the weight of the fluid volume displaced by the object.
Suppose in a displacer level sensor we have a cylindrical displacer rod of density ρ, radius r and length L, immersed in a process fluid of density γ. In this installation, the length of the displacer rod corresponds to the range of liquid level being measured:

Volume of displacer rod, $V = \pi r^2 L$

Note that, as shown in the displacer level sensor installation diagram above, when the vessel is full the displacer rod is completely immersed in the process fluid, hence the volume of process fluid displaced is $V = \pi r^2 L$. When the vessel is empty or the level is at its minimum, the volume of process fluid displaced is V = 0.

Hence:
When the vessel is full, the buoyant force on the displacer rod is given by:
Buoyant Force = weight of process fluid displaced

$= \pi r^2 L \gamma g$                (g = acceleration due to gravity)
Real weight of displacer $= \pi r^2 L \rho g$
Net weight of displacer sensed by the LVDT, transmitter and associated electronics when the vessel is full is:
$\pi r^2 L \rho g - \pi r^2 L \gamma g = \pi r^2 L g(\rho - \gamma) = Vg(\rho - \gamma)$
As can be seen above, the net weight sensed by the LVDT is proportional to the difference between the density of the displacer rod (ρ) and that of the process fluid (γ).
Therefore, the displacer rod must have a higher specific gravity than that of the liquid being measured, and it has to be calibrated for the specific gravity of that liquid. Typical specific gravities of liquids to which the displacer level sensor is applied range from 0.25 to 1.5. Another point worth mentioning is that the range of the displacer level instrument depends only upon the length (L) of the displacer rod specified for the given application.

When vessel is empty or level is minimum,
Buoyant force on displacer = 0
Hence, the weight sensed by the LVDT is $\pi r^2 L \rho g$.
The LVDT registers a voltage signal at minimum vessel level and outputs a corresponding signal. The displacer length is determined by the operating range (span) specified, and by the specific gravity, pressure, and temperature of the process fluid. The diameter and weight are factory calculated to ensure correct operation and provide an accurate 4-20 mA output.
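The full and empty cases above can be checked numerically with a small sketch such as the one below; the displacer dimensions, densities and immersed fractions are illustrative only.

```python
# Sketch of the buoyancy calculation above for a fully and partially
# immersed displacer. Dimensions and densities are illustrative numbers.
import math

G = 9.80665   # m/s^2

def net_weight(radius_m, length_m, rho_displacer, rho_fluid, immersed_fraction):
    """Weight sensed by the LVDT = real weight minus buoyant force (newtons)."""
    volume = math.pi * radius_m**2 * length_m
    real_weight = volume * rho_displacer * G
    buoyant_force = volume * immersed_fraction * rho_fluid * G
    return real_weight - buoyant_force

# 0.03 m radius, 1.0 m long displacer (SG ~ 2.0) in a liquid of SG ~ 0.8
print(net_weight(0.03, 1.0, 2000.0, 800.0, 0.0))   # vessel empty: full weight sensed
print(net_weight(0.03, 1.0, 2000.0, 800.0, 1.0))   # vessel full: lightest reading
```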

Areas of Application
The displacer level sensor is used in level measurement applications such as knock-out pots, condensate drums, separators, flash vessels, storage vessels and receiver tanks.




Basics of The Orifice Plate Flow Meter


As discussed in flow instrumentation: principles and formulas, when there is a flow restriction in a pipe, a differential pressure results which can then be related to the volumetric flow rate through the restriction.
The principle of a flow restriction creating a differential pressure is what is used in the orifice meter to measure the flow rate of liquids, steam and gases.
The orifice flow meter consists basically of:
(a) A primary device, the orifice plate that creates the flow restriction
(b) A secondary device that measures the differential pressure created by the orifice plate.




Flow Instrumentation: Principles and Formulas


The measurement of fluid flow is arguably the single most complex type of process variable measurement in all of industrial instrumentation. This is because there is a vast array of flow metering technologies that can be used to measure fluid flow, each with its own limitations and individual characteristics. Even after a flow meter has been properly designed and selected for the process application and properly installed in the piping, problems may still arise due to changes in process fluid properties (density, viscosity, conductivity), or the presence of impurities in the process fluid. Flow meters are also subject to far more wear and tear than most other primary sensing elements, given that a flow meter’s sensing element(s) must lie directly in the path of potentially abrasive fluid streams.

Given all these difficulties and complications of fluid flow measurement, it becomes imperative for any end user of a given flow meter technology to understand the complexities of flow measurement. What matters most is that you thoroughly understand the physical principles upon which each flow meter depends. If the “first principles” of each technology are understood, the appropriate applications and potential problems become much easier to identify and solve.
Here some basic principles and common formulas used in flow instrumentation are introduced.





Types of Orifice Plates Used in Flow Measurement


Orifice plates are the most widely used type of flow meters in the world today. They offer significant cost benefits over other types of flow meters, especially in larger line sizes, and have proved to be rugged, effective and reliable over many years. Where a need exists for a rugged, cost-effective flow meter with a low installation cost and a turn-down of not more than 4:1, the orifice plate continues to offer a very competitive solution for flow rate measurement.

Here we introduce the most common types of orifice plates used in industrial process plants for flow measurement applications.

Types of Orifice Plate Designs
The orifice plate is a relatively inexpensive component of the orifice flow meter. Orifice plates are manufactured to stringent guidelines and tolerances for flatness, bore diameter, surface finish, and imperfections in machining such as nicks and wire edges on the bore. Specific tolerances applicable to orifice plates in industrial applications are detailed in the American Gas Association (AGA) Reports, especially Report 3. The common designs of orifice plates available are:




Operating Principle of Capacitance Level Sensors


Capacitance level instruments operate on the basic principle of the variation of the electrical capacitance of a capacitor formed by the sensor, the vessel wall and the dielectric material between them. A capacitor is made up of two conductive plates separated from each other by a dielectric. The storage capability of a capacitor, defined by its capacitance C, depends directly on the plate area (A), the distance between the plates (d) and the permittivity (ε) of the dielectric material between the plates:

C = εA/d
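As a small numerical illustration of C = εA/d, the sketch below compares the capacitance of the same geometry with an air gap and with a high-dielectric liquid (water) filling the gap; the plate area and spacing are assumed example values.

```python
# Sketch of C = epsilon*A/d using the permittivity of free space times an
# assumed relative dielectric constant; geometry values are illustrative.

EPSILON_0 = 8.854e-12   # F/m, permittivity of free space

def capacitance(relative_permittivity, plate_area_m2, plate_gap_m):
    """Parallel-plate capacitance C = epsilon_r * epsilon_0 * A / d, in farads."""
    return relative_permittivity * EPSILON_0 * plate_area_m2 / plate_gap_m

# As a liquid with a higher dielectric constant rises between probe and vessel
# wall, the effective permittivity, and hence the capacitance, increases.
print(capacitance(1.0, 0.05, 0.01))    # air-filled gap
print(capacitance(80.0, 0.05, 0.01))   # water-filled gap, much larger capacitance
```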






How to Calibrate a Pressure Gauge



In the plant, pressure gauge calibration is often taken for granted, probably because gauges seem to be everywhere and one just assumes that, somehow, they are accurate even when they are out of calibration. A pressure gauge can be calibrated with a standard pneumatic calibrator, a dead weight tester or any other suitable calibrator.
There is no standard way to calibrate a pressure gauge. The way a gauge is calibrated depends on the way the gauge is used. Here an outline is given of how a pressure gauge could be calibrated with any type of calibrator.




How to Select a Pressure Gauge


Mechanical pressure gauges, which require no external power, provide an affordable and reliable source of accurate pressure measurement. Selecting the right pressure gauge requires that the device be accurately specified; otherwise, numerous problems might surface in the course of using a wrongly selected gauge. The factors discussed here are by no means exhaustive, but they are the main considerations that will help you select the right type of pressure gauge. For the right specification and selection process, consult the manufacturer of the particular gauge you intend to use.

Pressure gauges should be selected by considering the factors below to prevent misapplication. Improper application can be detrimental to the gauge, causing failure and possible personal injury or property damage.

To select the right pressure gauge for your application, the following factors should be considered:

Gauge Accuracy
Pressure gauge accuracy ranges from grade 4A to grade D according to ASME B40.1. For a mechanical pressure gauge, accuracy is defined either as a percentage of the full-scale range or as a percentage of span. As the accuracy increases, so does the price of the gauge. Therefore, the application in which the gauge will be used should be carefully considered before deciding on the accuracy of the gauge. While requirements differ from one industry to another, the following are general guidelines:
• Test Gauges and Standards: 0.25% through 0.10% full-scale accuracy.
• Critical Processes: 0.5% full-scale accuracy.
• General Industrial Processes: 1.0% accuracy.
• Less Critical Commercial Uses: 2.0% accuracy.





An Introduction to Pressure Gauges



A pressure gauge is a pressure sensor used to indicate the pressure of a given process or system. A pressure gauge usually refers to a self-contained indicator that converts the detected process pressure into the mechanical motion of a pointer. Depending on the reference pressure used, a gauge can indicate absolute, gauge, or differential pressure. “Gauge” pressure is defined relative to



