Before we delve into the tables that control engine operation, it is important to understand what the computer sees. The old saying, “garbage in, garbage out,” applies directly to EFI systems. More often than not, the cause for a poor-running car lies not in a bad calibration, but rather in a bad calculation. This is to say that engine output controls are formed based on inputs and calculations. A perfectly good calculation on a bad input parameter drives just as poorly as a poorly tuned car. Although not all sensors are absolutely critical to engine function or drivability, there are a handful that the EFI computer can’t live without. Likewise, sometimes we have redundant sensors where only one really impacts the final calculation of engine output controls and others are merely monitors.
This Tech Tip is from the full book, ENGINE MANAGEMENT: ADVANCED TUNING.
The throttle position sensor (TPS) is one of the most critical inputs for any EFI system. Think of it as the volume knob on the stereo in form and function. The TPS tells the computer exactly where the throttle blade is so that it can determine whether the driver is attempting to idle, cruise at a steady state, accelerate, or decelerate. Additionally, the rate and direction of change of this sensor helps the computer determine if the driver is attempting to change states. Most TPS units are basically rotary potentiometers that vary output based on position around a dial. The further up the dial, the longer the electrical path gets through a set of resistors. Output is usually read on a 0- to 5-volt scale, with 0.5 to 1.0 v typically indicating closed throttle and 4.5 v or greater indicating wide open throttle (WOT).
It is important to know exactly what the threshold is between closed throttle (C/T) and part throttle (P/T) for calibration to ensure that the computer actually uses the idle tables when the driver’s foot is off the pedal. It is a common mistake for a car owner to open the idle screw on the throttle body without checking TPS output. If the blade is opened beyond the C/T threshold to prevent stalling, the TPS must be readjusted to reflect the new closed throttle position. Likewise, a stretched cable or bent linkage may prevent the computer from seeing WOT even though the blade is 99% open. Since the effective flow area changes so little between 70% and 100% blade opening, many systems consider anything over 70% or so to be wide open depending on RPM.
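As a minimal sketch of the threshold logic described above, the state decision can be written as a simple classifier. The specific voltage values here are assumptions for illustration (real calibrations store a learned closed-throttle voltage, and many treat roughly 70%-plus blade angle as WOT):

```python
def throttle_state(tps_volts, ct_threshold=0.98, wot_threshold=4.5):
    """Classify throttle position from raw TPS voltage.

    Threshold values are placeholders, not from any particular ECU.
    """
    if tps_volts <= ct_threshold:
        return "closed"       # driver's foot is off the pedal: idle tables
    elif tps_volts >= wot_threshold:
        return "wide_open"    # WOT fueling and spark strategy
    return "part_throttle"    # steady-state cruise / transition
```

If the idle screw is opened without readjusting the sensor, the resting voltage can creep above `ct_threshold` and the idle tables are never used, which is exactly the failure mode described above.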
Temperature of the engine itself is another extremely important monitoring point, and it cuts both ways. Engines have a relatively narrow temperature band in which they operate most effectively. Too cold, and fuel has trouble atomizing before the combustion process. Too hot, and preignition in the chamber produces negative effects nearly identical to knock, while expansion and distortion risk warping critical sealing surfaces. Actual desired operating temperature depends upon desired engine usage. Most current OEM systems are thermostatically controlled to about 200 degrees F to allow for ideal combustion and emissions. Typically, a 20-degree drop to about 180 degrees F nets cooler chamber temperatures and allows those few extra degrees of spark advance that make more horsepower. Much like the mechanical choke on a carburetor, EFI systems allow for enrichment and added idle speed at cold temperatures. Going too cool on the thermostat opening temperature can keep many OEM processors in the warm-up routine, skewing target fuel delivery and idle speed. A skilled calibrator can change the parameters that determine what is “warmed up” to avoid excess enrichment. However, intentionally running cold engine and head temperatures is usually reserved for drag race applications where emissions and cylinder wash from excess raw fuel are not a concern.
Knowing how much enrichment to add depends directly upon actual temperature. This comes from a sensor in either the coolant path or mounted directly in the cylinder head itself. These sensors are typically a basic thermistor whose resistance varies with contact temperature. Most ECT sensors are of the negative temperature coefficient (NTC) type, meaning resistance drops as temperature increases. With a steady input voltage (usually 5 v) fed through the thermistor, increasing temperatures are read as higher return voltages from the sensor, a direct result of the falling resistance per Ohm’s Law.
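The thermistor-plus-divider arrangement described above can be sketched numerically. This uses the common Beta model for NTC resistance; the component values (2,500 ohms at 25 degrees C, Beta of 3380, matched fixed resistor) are illustrative assumptions, not values from the text:

```python
import math

def ntc_resistance(temp_c, r25=2500.0, beta=3380.0):
    """NTC resistance via the Beta model. r25 and beta are
    illustrative values, not any specific sensor's datasheet."""
    t_k = temp_c + 273.15
    return r25 * math.exp(beta * (1.0 / t_k - 1.0 / 298.15))

def ect_return_voltage(temp_c, r_fixed=2500.0, v_supply=5.0):
    """Voltage read across a fixed resistor fed through the thermistor.

    As temperature rises, NTC resistance falls, so more of the 5-v
    supply appears across the fixed resistor -- matching the topology
    described in the text (hotter engine = higher return voltage).
    """
    r_ntc = ntc_resistance(temp_c)
    return v_supply * r_fixed / (r_ntc + r_fixed)
```

With these example values, the return voltage sits at exactly half the supply (2.5 v) at 25 degrees C and climbs as the engine warms.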
If the sensor is placed in the coolant path, it is important that no air pockets are present. It is a common error to register a relatively cold input signal when the sensor is actually sitting in a steam pocket out of contact with actual engine temperatures. Since most EFI systems have safeguards in the code to allow for higher idle speed (more coolant circulation), increased electric fan activity, and richer fueling conditions at high engine temperatures, this input can be an engine saver. A cylinder head temperature sensor mounted directly to the casting reduces the chance of this error. The thing to keep in mind here is that cylinder head temperatures are typically 8 to 15 degrees F warmer than coolant temperatures at any given time due to conductivity.
Air temperature has a direct effect on density as well as burn rate. As air is heated, it gains volume and loses density. In speed-density systems, this measurement is critical to determining exactly how many oxygen molecules are available for combustion in the current cycle. Since the engine displaces the same volume of manifold air in any given cycle, temperature directly affects the density (number of available oxygen molecules) actually making it into the chamber. On mass air measurement systems, air-inlet temperature does not directly adjust calculated charge fill for fuel calculations, but it is still used as a modifier to control burn characteristics. Allowable spark advance varies inversely with inlet temperature, assuming constant fuel delivery. Colder inlet temperatures mean more timing advance can be used, resulting in more available horsepower. Conversely, much hotter inlet temperatures (often a result of supercharging) require reduced timing to avoid the knock that comes from the increased burn speed.
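The temperature-to-density relationship above follows directly from the ideal gas law. A one-line sketch (using the specific gas constant for dry air, 287.05 J/kg-K; the specific pressure and temperature values below are just examples):

```python
def air_density(pressure_kpa, temp_c):
    """Ideal-gas density of dry air in kg/m^3: rho = P / (R * T)."""
    R_AIR = 287.05  # specific gas constant for dry air, J/(kg*K)
    return pressure_kpa * 1000.0 / (R_AIR * (temp_c + 273.15))
```

At the same manifold pressure, charge at 100 degrees C carries noticeably less oxygen per cycle than charge at 20 degrees C, which is why the computer must know inlet temperature before it can fuel accurately.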
Most inlet-temperature sensors are very similar in design and function to NTC coolant-temperature sensors. Since air density is so much lower than coolant density, sensor elements can be unshielded without risk of damage, making for faster response to changes in conditions. It is important for the calibrator to understand where the inlet temperature is being monitored on forced induction applications. While some OEM supercharged applications actually employ two inlet-temperature sensors to monitor both ambient conditions and actual inlet-port temperature, most systems only have one such input. Ideally, these sensors should be placed as near to the cylinder-head intake port as possible so that the computer sees actual charge temperature. This allows for tighter control in supercharged applications where charge temperatures vary from 90 degrees F ambient to over 300 degrees F in non-intercooled vehicles. Again, a skilled calibrator can compensate in the tune for a supercharged engine whose inlet-temperature sensor is installed ahead of the compressor by shifting the temperature compensation function. With an appropriate safety margin, everything can work just fine, but it’s easier to relocate the sensor for a more accurate signal.
Some systems add another sensor to monitor manifold-surface temperature. This allows for an additional routine in the computer to model heat transfer from the intake manifold to or from the intake charge immediately ahead of the cylinder. Again, this effect is more pronounced on speed-density systems where the actual air mass entering the cylinder must be calculated and temperature plays a bigger part in required fuel delivery. These sensors are usually almost identical in design and construction to the NTC coolant-temperature sensors.
Mass Air Flow
Since internal combustion engines are so sensitive to air/fuel ratio, it is important to know exactly how much of each component is entering the engine at any given time. The best way to ensure accurate fuel delivery is to have accurate air measurement before calculating anything. Speed-density systems are constantly making calculations of estimated airflow based on pressure, temperature, and engine speed. Mass air systems employ a sensor that directly measures air mass flow into the engine. Basically, the more air molecules moving past the sensor, the more the signal changes.
Early units used spring-loaded doors that were pushed further open by the force of the incoming air. The force on the door is proportional to the density and velocity of the incoming air. The doors of these sensors were attached to a rotary potentiometer much like the throttle position sensor. The drawback to these units was the lack of temperature compensation; they require further calculation based on inlet temperature to determine the actual air mass.
Modern units use a small heated wire or film element suspended in the air stream. As more air molecules pass by the heated element, more current must pass through it to maintain a constant temperature. As current increases in the heated element, the output of the sensor changes. This heated element is cooled in proportion to the mass of air flowing over it, incorporating temperature compensation for a direct mass flow input to the PCM. For Ford vehicles, output is a 0 to 5 v signal. For GM vehicles, output is a 0 to 12,000 Hz wave. Either way, the transfer function ends up as an exponential curve with more resolution at low flow to accurately meter idle and cruise loads. It takes relatively large increases in airflow at the high end to make a change. Although the wire element itself is relatively small, it is placed in the meter housing such that the amount of air crossing the element is directly proportional to the total amount of air entering the engine. Most OEM mass airflow sensors are sized to be slightly smaller than the rest of the inlet plumbing in an effort to control velocity and increase accuracy.
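The transfer function described above is stored in the computer as a table of output points versus flow, which the ECU interpolates at run time. A minimal sketch for a GM-style frequency output; the breakpoints below are invented to show the steep-at-low-flow, flat-at-high-flow shape, not real calibration data:

```python
# Illustrative frequency-to-flow transfer table (Hz -> grams/second).
# Real tables come from flow-bench characterization of the meter.
MAF_TABLE = [(1500, 2.0), (3000, 8.0), (5000, 25.0),
             (7000, 60.0), (9000, 130.0), (11500, 310.0)]

def maf_flow_gps(freq_hz, table=MAF_TABLE):
    """Linearly interpolate mass airflow from sensor output frequency."""
    if freq_hz <= table[0][0]:
        return table[0][1]
    for (f0, g0), (f1, g1) in zip(table, table[1:]):
        if freq_hz <= f1:
            return g0 + (g1 - g0) * (freq_hz - f0) / (f1 - f0)
    return table[-1][1]  # sensor is "pegged" at its maximum output
```

Note the final line: once the meter reaches its top breakpoint, every higher actual flow reads the same, which is the "pegged MAF" condition discussed below.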
A common mistake made by many performance enthusiasts is to cut out a portion of the mass airflow (MAF) sensor to improve total flow. While total flow is increased by doing this, the side effect is a change in the ratio of air across the metering element to total flow, reducing the output of the MAF sensor at any given actual flow rate. The net effect of this is a leaner engine-operation condition resulting from fuel calculations based on a lower airflow input to the computer. For example, cutting the center divider out of a GM LS1 MAF sensor typically shifts the output down by about 7%. The bigger concern is that this modification does not simply shift output down across the range, as low speed flow demonstrates a more pronounced effect. While many OEM systems run a safely rich air/fuel ratio at WOT, leaning out the mix by changing MAF output can lead to knock.
The problem that many performance enthusiasts encounter is that the OEMs have not intended the power (and total airflow) of the vehicle to be drastically increased. Most OEM mass airflow sensors are scaled to have the most resolution possible within the anticipated range of possible flow requirements. As airflow increases with increased power production, we often find that this anticipated maximum flow can be exceeded. The result is an OEM meter that reaches its maximum output before the engine peaks. This is often referred to as a “pegged MAF sensor.” Due to the limited measurement range and a typically small size, it is common to replace the OEM MAF sensor on performance vehicles. Aftermarket MAF sensors invariably have a different output from the original, even if only slightly. The ability to shift this output upward significantly makes their use almost mandatory on higher output engines. Once the OEM meter has been replaced, it is up to the calibrator to change the tables in the computer to reflect the new flow rates at each output point. If this new output curve is modeled correctly, the result is accurate measurement even at higher flow rates. As long as the computer knows exactly how much air mass is entering the engine, calculating the proper amount of fuel is relatively easy.
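Updating the tables for a replacement meter, as described above, amounts to assigning a corrected flow value to each output breakpoint. A sketch of that bookkeeping, where the correction factors would in practice come from flow-bench data or wideband-derived fueling error (the numbers here are hypothetical):

```python
def rescale_maf_table(table, corrections):
    """Apply per-breakpoint flow corrections to a MAF transfer table.

    `table` is a list of (output_breakpoint, flow) pairs;
    `corrections` maps breakpoints to multipliers, e.g. 1.07 means the
    old entry read 7% low at that point. Unlisted points are unchanged.
    """
    return [(bp, flow * corrections.get(bp, 1.0)) for bp, flow in table]
```

Because errors like the cut-divider modification are not uniform across the range, per-breakpoint corrections are needed rather than a single global multiplier.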
Aside from sensor flow and output range, the calibrator must also pay attention to the actual installation of the MAF sensor. Since we are ultimately dealing with airflow through a confined path and only metering a small portion of it to determine total flow, it is important to understand the effects of plumbing on meter outputs. The primary concern is the assumption of laminar flow across the area of the MAF sensor. If airflow across the sensor is greater on one side than another, the output can be skewed relative to the clocking of the metering element. The common source for such a problem is usually a bend in the intake plumbing immediately ahead of the MAF sensor. Since airflow is biased toward the outside radius of a bend, placing the metering element on the outside radius of a bend yields an output that indicates higher total flow than is currently present. The opposite is also true with reduced outputs resulting from element placement on the inside radius of a bend. Generally speaking, if the MAF must be placed following a bend in the inlet plumbing, it should be done such that the element is perpendicular to the plane of the bend. Air filter design or placement in the vehicle may also demonstrate a similar condition.
Constant diameter approaching the MAF sensor is also desirable. Remember that the same mass flow through a smaller diameter increases velocity. If the diameter of the inlet plumbing increases immediately ahead of the MAF sensor, it often results in an unstable charge flow with plenty of tumbling. This tumbling air may cause the wire element to read incorrectly depending on inlet flow and velocity. A length of straight, constant diameter pipe is preferred before any MAF sensor.
Typically this length should be at least double the diameter. This gives any tumbling flow an opportunity to stabilize before entering the sensor. Many OEM applications reduce clocking and velocity effects by integrating a laminar flow element into the MAF sensor assembly. The design of these flow straighteners ranges from simple wire screens to honeycombs whose small passages keep the local flow laminar. Many performance enthusiasts looking for extra power mistake the laminar flow element for a genuine restriction and remove it. The benefits of removal rarely justify the loss in metering accuracy and usually fall well below the statistical accuracy of most chassis dynamometers. Typical changes I have seen from such modification rarely exceed 4 hp at the wheels, even on modified V-8 engines. If a vehicle has a modified inlet system and erratic MAF sensor output, it is often helpful to install some sort of flow straightening. This can be done with a simple wire mesh or a length of straight pipe leading into the MAF sensor.
The manifold absolute pressure (MAP) sensor is a key element in speed-density systems. Charge density can be calculated by measuring the exact pressure in the intake manifold and knowing an accurate temperature. Air mass can then be calculated accurately by knowing the system efficiency at a given point and multiplying that by the density and displacement. This gives the computer a number to work from for airflow and allows for proper fuel delivery calculations.
The manifold pressure itself is measured by a diaphragm that is directly acted upon by the air in the manifold, sometimes connected by a vacuum line. This diaphragm in turn acts directly upon a normal strain gauge. The range of these sensors is usually measured in bar, or single atmospheres. That is, a one-bar MAP sensor has a range from full vacuum to one atmosphere of pressure and is used in most naturally aspirated engines. A two-bar MAP sensor has a range from full vacuum to one bar above atmospheric (14.7 psi of boost), and so on. Manifold pressure is directly proportional to engine load, so it is used as the y-axis versus speed for the fuel and spark maps in most speed-density systems.
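Putting the pieces together, the efficiency-times-density-times-displacement calculation described above can be sketched as a single function. This is an illustrative simplification (using the ideal gas law for density and treating VE as a known constant, where a real ECU looks VE up from a table of speed and load):

```python
def cycle_air_mass_g(map_kpa, iat_c, displacement_l, ve):
    """Speed-density air mass per full engine cycle, in grams.

    m = VE * V_displacement * rho(MAP, IAT), where rho comes from the
    ideal gas law. VE here is a placeholder constant; production ECUs
    interpolate it from calibration tables.
    """
    R_AIR = 287.05                                          # J/(kg*K)
    rho = map_kpa * 1000.0 / (R_AIR * (iat_c + 273.15))     # kg/m^3
    return ve * (displacement_l / 1000.0) * rho * 1000.0    # grams
```

For example, a 5.7-liter engine at 100 kPa, 20 degrees C, and 90% volumetric efficiency ingests roughly 6 grams of air per cycle; dividing by the target air/fuel ratio then yields the fuel mass to deliver.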
Barometric absolute pressure (BAP) sensor design and function is nearly identical to that of MAP sensors. The difference is that a BAP sensor is not connected to the intake manifold, but open to the air. The primary function of the BAP sensor is to help mass-air-based systems detect changes in altitude that require adjustments to calculated load and timing. Many modern mass air systems infer the barometric pressure based on MAF sensor output at startup.
The crankshaft or cam position (CKP) sensor is typically a two-piece system. Part one is a wheel installed on either the crankshaft or camshaft. This wheel has teeth or ridges made of a ferromagnetic metal, usually steel. Often, one or more of these teeth is offset, missing, or extra. Part two of the sensor is a magnetic or Hall-effect pickup designed to provide an output that fluctuates with the passing of each tooth or ridge on the signal wheel.
The CKP sensor performs two primary functions. First and most important is to report instantaneous engine speed. The second function, more critical to sequential systems, is to indicate actual rotational position. The location of the missing or extra tooth on the wheel allows the computer to synchronize the first fuel pulse and spark event with the number one cylinder. Additionally, if the computer fails to see the expected speed or acceleration from the CKP, it can recognize a misfire event. Knowing exactly how far past the last synchronization the misfire happens also correlates to a particular cylinder. This makes it easier for the computer to self-diagnose a bad individual plug, coil, or injector.
A less common sensor is the fuel rail delta pressure (FRP) sensor. Injector flow rate increases as pressure across the injector increases. This must be modeled to correctly calculate the amount of fuel actually making it into the intake port. Return fuel systems use a fixed ratio of fuel to intake pressure controlled by a mechanical regulator. In a returnless system, regulation is performed by varying the duty cycle of the fuel pump itself. This eliminates the need for a return line from the engine to the tank. It also eliminates the associated heat in the fuel and splashing upon return to the tank, and reduces evaporative emissions. The FRP sensor is used in returnless fuel systems to provide feedback to the computer for fuel pump control. Since returnless systems use a variable output fuel pump, the computer needs to know how much pressure is available across the injectors at any given time. The design and function of the FRP sensor is very similar to the MAP sensor. The difference here is that the FRP sensor is exposed to fuel under pressure from the rail on one side of its diaphragm and intake manifold pressure on the other. This means that only the difference (delta) between the two effects a change in the diaphragm’s position and signal output. For example, a supercharged engine with commanded rail pressure of 40 psi should show 50 psi on a common pressure sensor under 10 psi of engine boost, but a delta of 40 psi from the FRP sensor to maintain constant injector potential.
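The arithmetic in the supercharged example above is worth making explicit, along with the reason the computer cares: injector flow depends on the pressure difference across the injector. The square-root flow relationship below is the standard orifice approximation, an assumption rather than something stated in the text:

```python
import math

def rail_gauge_pressure(delta_psi, manifold_boost_psi):
    """Gauge pressure a plain rail-pressure sensor would report when
    the system holds a constant delta across the injectors."""
    return delta_psi + manifold_boost_psi

def injector_flow(rated_flow, rated_delta_psi, actual_delta_psi):
    """Orifice approximation: injector flow scales with the square
    root of the pressure across it. (Standard model, not from the
    text; rated_flow is the injector's flow at rated_delta_psi.)"""
    return rated_flow * math.sqrt(actual_delta_psi / rated_delta_psi)
```

So with a commanded 40-psi delta and 10 psi of boost, a common gauge-referenced sensor shows 50 psi while the FRP sensor reports 40 psi, and injector flow stays constant; if the delta were allowed to sag, flow would fall off as the square root.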
Although there is never an external sensor for voltage, all EFI systems have means of monitoring it. Power is required to operate the computer in the first place, so it is simple to monitor system voltage directly off the board. This voltage is used to model injector performance as well as ignition coil saturation and dwell time. Lower system voltages require more time for the field to build up in the transformer of the ignition coil as well as the electromagnet of the fuel injector.
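The voltage compensation described above is typically handled with a lookup table of injector opening delay ("dead time") versus system voltage. The values below are invented for illustration; real numbers come from injector characterization data:

```python
# Hypothetical dead-time table: battery volts -> milliseconds of
# opening delay. Real values are measured per injector design.
DEADTIME_TABLE = [(8.0, 2.4), (10.0, 1.8), (12.0, 1.3),
                  (14.0, 1.0), (16.0, 0.85)]

def injector_deadtime_ms(vbatt, table=DEADTIME_TABLE):
    """Interpolate injector opening delay from battery voltage.

    Lower voltage means the solenoid's magnetic field builds more
    slowly, so the delay (and the pulsewidth the ECU must add) grows.
    """
    if vbatt <= table[0][0]:
        return table[0][1]
    for (v0, t0), (v1, t1) in zip(table, table[1:]):
        if vbatt <= v1:
            return t0 + (t1 - t0) * (vbatt - v0) / (v1 - v0)
    return table[-1][1]
```

The ECU adds this delay on top of the fueling pulsewidth it calculates, so a weak charging system quietly shortens effective injection time unless it is compensated.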
Feedback control systems are as old as electronic fuel injection. Modern EFI systems have the ability to constantly correct for errors between desired and delivered air/fuel ratios. This is done by monitoring the oxygen content of the exhaust gases. Since combustion is just the chemical mixing of oxygen and hydrocarbon fuel, variances in the mix leave an inconsistency in the exiting molecules of the exhaust. An oxygen sensor (lambda sensor) is basically a “battery” that changes potential in the presence of oxygen. A layer of zirconium oxide (ZrO2) changes the sensor’s output voltage based on the partial pressures of the oxygen molecules. The more oxygen present, the lower the output voltage is. The computer uses this output to trim the commanded fuel delivery in what is called closed loop operation. This is where actual commanded injector pulsewidth is actively being adjusted based on feedback from the oxygen sensors. Although this correction is accurate under normal operating conditions, a cold sensor and relatively cold exhaust gases found at startup make for poor accuracy. This is why the computer typically ignores the sensor output for a brief period of time after start, allowing the sensor to reach operating temperature.
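One closed-loop update step can be sketched as a simple integrator that nudges a fuel trim multiplier each time the narrowband output crosses its switch point. This is a minimal illustration of the concept, not any specific OEM strategy; the 450-mV switch point and step size are assumptions:

```python
def update_fuel_trim(trim, o2_mv, step=0.005, switch_mv=450.0):
    """One closed-loop correction step (illustrative sketch only).

    Narrowband output above ~450 mV indicates rich of stoichiometric,
    so the trim multiplier is stepped down; below it indicates lean,
    so trim is stepped up. The ECU repeats this every control cycle,
    causing the familiar rich/lean oscillation around stoichiometric.
    """
    if o2_mv > switch_mv:
        return trim - step   # reading rich: pull fuel
    return trim + step       # reading lean: add fuel
```

The trim multiplier then scales the commanded injector pulsewidth, and its long-term average tells the calibrator how far the open-loop tables are from reality.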
To improve accuracy of the measurement process, engineers discovered heat is a vital component. This is the reason oxygen sensors are currently installed as close to the cylinder head as possible and now include heating elements of their own to bring them up to operating temperature as quickly as possible after startup. These heated exhaust gas oxygen sensors are abbreviated as HEGOs. The time between startup and warmed up closed loop operation is critical to emissions since engines typically run slightly rich when cold to prevent stalling. Any reduction in this time period results in lower total hydrocarbon (unburned fuel) emissions.
Most OEM systems employ a standard HEGO sensor on each bank of cylinders with exceptional accuracy near stoichiometric combustion. These narrow band sensors are only accurate within about +/- one air/fuel ratio of stoichiometric. This means that 13.0:1 may look “rich” to the HEGO, even though it is too lean to support the intended power level. This is why most EFI systems switch to open loop operation at WOT. It is a common error for inexperienced tuners to rely upon the factory narrow band HEGO for fuel monitoring under load while tuning. I have seen dozens of broken engines directly resulting from what the tuner thought was a rich condition when in fact it was merely richer than the accuracy range of the sensor.
Also important to observe is the effect of leaks and cam timing on HEGO readings. Any exhaust leak upstream or in the vicinity of the HEGO skews the readings lean. Since there are pressure pulses in the exhaust, any leak allows exhaust gases to pulse out and fresh air (containing plenty of oxygen) to pulse in. The result is an inordinate number of oxygen molecules passing by the HEGO, leading the computer to respond with longer injector pulses to compensate for what it thinks is a lean engine. This can lead to a circle of problems where the normally operating engine becomes overfueled, fouling the spark plugs. This in turn leads to a rich misfire condition sending more unburned oxygen past the sensor and the computer, again trimming the engine richer on the next cycle.
It is almost impossible to accurately calibrate an engine with exhaust leaks, since neither a standard HEGO nor a wideband yields correct results following such a leak. The solution here is to fix the mechanical problem before attempting to tune.
Likewise, a misfire event where the spark is being blown out before ignition also demonstrates a lean reading on the oxygen sensor output. The solution here is to increase ignition energy or shorten the plug gap to keep the spark kernel alive long enough to ignite the charge.
Cam timing can also cause strange outputs from the HEGO. Although increased overall cam duration adds power compared to stock camshafts, the added overlap can wreak havoc on HEGO control. The wash of unburned intake charge into the exhaust has the same effect as an exhaust leak. This effect usually diminishes at higher speeds and loads since ram tuning tends to feed more of the charge into the cylinder rather than out the exhaust valve. Depending on the amount of overlap in a camshaft, it may be necessary to ignore HEGO outputs at low load or even altogether for fuel trimming purposes.
Many aftermarket EFI systems incorporate more exotic wideband oxygen (WBO2) sensors with an accurate range between 8.0 and 30.0:1 air/fuel. This means that it becomes possible for the computer to actively correct fuel delivery at target ratios well outside the normal range as well as record them for later inspection. With improved accuracy over such varied conditions, the wideband oxygen sensor is a tremendously powerful tuning tool for any calibrator even if it is not integrated with the engine controls themselves. Just like with HEGOs, the wideband oxygen sensor must be kept hot to operate correctly and its output must be normalized with temperature for accuracy. Many less expensive standalone wideband sensors lack this temperature compensation circuit and pay for it in accuracy.
Exhaust backpressure can also be a concern for wideband oxygen sensor accuracy. Most wideband oxygen sensors are rated to operate between 0.8 and 1.3 bar with reasonable accuracy. Error increases when operated outside of the normal pressure range. There is no sensitivity to pressure at stoichiometric conditions. The pressure sensitivity becomes greater the further from stoichiometric the engine is operated. Increases in pressure make the sensor read further from stoichiometric (i.e., rich reads richer and lean reads leaner). For this reason the WBO2 should be installed after the turbocharger to improve accuracy at high load. Moving the sensor further away from the engine is not desirable for transient accuracy at lower speeds due to delay time, but the accuracy improvement during critical WOT fuel measurement should outweigh this. If high exhaust backpressure at the WBO2 installation point is unavoidable, an extra safety margin should be used on commanded fuel enrichment to compensate for this measurement error effect.
Some PCMs are equipped with special sensors to detect the presence of detonation. These knock sensors are basically a microphone with a bandpass filter to isolate the specific range of frequencies associated with knock. The microphone is attached to the engine block or head in the water jacket to take advantage of the faster speed of sound compared to airborne monitoring. Some systems are configured with a range as wide as 4 to 20 kHz, but many are calibrated to be more discriminating based on actual engine characteristics. Most engine knock occurs in the 10 to 12 kHz range. Even if the vehicle is not equipped with such a sensor, some calibrators choose to add their own monitoring circuit during the tuning process to aid in finding the actual detonation threshold of the engine.
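A software version of the bandpass idea above can be sketched with the Goertzel algorithm, which measures signal energy at a single frequency and so makes a crude per-band knock filter. The sample rate, band frequencies, and detection threshold below are placeholders; real knock thresholds are calibrated against actual engine noise:

```python
import math

def goertzel_power(samples, target_hz, sample_rate):
    """Goertzel algorithm: squared magnitude of one frequency bin."""
    k = 2.0 * math.cos(2.0 * math.pi * target_hz / sample_rate)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + k * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - k * s1 * s2

def knock_detected(samples, sample_rate=100_000, threshold=1e4):
    """Flag knock when energy in the 10-12 kHz window exceeds a
    threshold. Both the window and threshold are illustrative."""
    band = [goertzel_power(samples, f, sample_rate)
            for f in (10_000, 11_000, 12_000)]
    return max(band) > threshold
```

A production knock strategy also windows the sampling to the crank-angle range where knock can physically occur for each cylinder, which keeps valvetrain noise from masquerading as detonation.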
In some instances, the changes to the valvetrain, pistons, or exhaust components can inadvertently trigger knock sensor output. GM ran into this problem with the LT4 Corvette engine and countered with a recalibrated sensor. Hard contact between the powertrain (including the exhaust system) and chassis can also be conducted back to the knock sensor, triggering a “false” knock signal. The desired solution is to fix the interference condition rather than desensitize the knock circuit. In some cases where valvetrain or piston noise cannot be avoided, the last resort is to disable the knock circuit only after careful spark calibration to avoid genuine knock that would now go undetected by the disabled sensors.
Written by Greg Banish and Posted with Permission of CarTechBooks