Why all the arguing over throttle suctioning loss, when it is minuscule in practical application?
Because the inefficiency caused by the throttle plate itself is effectively a percentage of the air mass flowing past it over a given period, the only time it becomes significant is when RPM is high and throttle position is very low, a condition most of us rarely encounter in our normal driving routine.
For this reason, coast-down tests from high RPM in gear are not very revealing for predicting the actual efficiency loss during normal driving.
You would be creating an unusually low pressure in the intake ports while moving a relatively large mass of air past the throttle over a given period. This boosts the percentage of inefficiency by an order of magnitude over what you could expect during normal cruising.
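To put rough numbers on that, here is a back-of-envelope Python sketch. It approximates pumping work per engine cycle as the intake pressure deficit acting over the swept volume; the displacement, MAP values, and cruise fuel power are all assumed for illustration, not measured from any particular engine.

```python
# Rough, illustrative estimate of throttling (pumping) loss power.
# Approximation: pumping work per cycle ~= (P_atm - P_man) * V_disp,
# i.e. the intake-stroke pressure deficit acting over the swept volume.
# All numbers below are assumed for illustration, not measured values.

def pumping_power_watts(rpm, map_kpa, disp_liters, p_atm_kpa=101.3):
    """Approximate power lost pulling air past the throttle restriction."""
    delta_p_pa = (p_atm_kpa - map_kpa) * 1000.0      # Pa
    v_disp_m3 = disp_liters / 1000.0                 # m^3 swept per 2 revs
    cycles_per_sec = rpm / 120.0                     # 4-stroke: one intake stroke per 2 revs
    return delta_p_pa * v_disp_m3 * cycles_per_sec   # W

# Normal cruise: 2.0 L engine, 2000 RPM, ~60 kPa MAP (assumed)
cruise = pumping_power_watts(2000, 60, 2.0)

# Closed-throttle coast-down: 4000 RPM, ~20 kPa MAP (assumed)
coast = pumping_power_watts(4000, 20, 2.0)

fuel_power_cruise = 50_000.0  # W of fuel energy at cruise (assumed: ~15 kW brake / 30% eff.)

print(f"Cruise pumping loss: {cruise:.0f} W (~{100*cruise/fuel_power_cruise:.0f}% of fuel energy)")
print(f"Coast pumping loss:  {coast:.0f} W, against zero useful output")
```

Under these assumed conditions the cruise loss is a few percent of fuel energy, while the coast-down loss is several kilowatts measured against zero power output, which is why the coast-down "% inefficiency" looks so much worse than anything you see in normal driving.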
Another thing I must point out, if you're thinking about reducing pumping loss while in DFCO mode, is that one of the thresholds for DFCO operation is manifold pressure. If you raise the manifold pressure while in DFCO, the system is likely to kick out of DFCO mode and inject fuel again, because the ECM interprets the pressure rise as you having opened the throttle.
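A minimal sketch of that kick-out behavior, with invented thresholds (real OBD1/OBD2 calibrations use more inputs, delays, and different numbers):

```python
# Simplified, hypothetical sketch of a DFCO enable/disable decision.
# Real calibrations also check coolant temp, gear, decel timers, etc.;
# the thresholds below are invented for illustration.

DFCO_MIN_RPM = 1500        # stay above this to keep fuel cut active (assumed)
DFCO_MAX_MAP_KPA = 35.0    # manifold pressure ceiling for fuel cut (assumed)

def dfco_active(rpm, map_kpa, throttle_closed):
    # Raising MAP (e.g. by bleeding in air to cut pumping loss) pushes
    # map_kpa over the ceiling and the ECM resumes injecting fuel --
    # it reads the pressure rise the same as an opening throttle.
    return throttle_closed and rpm > DFCO_MIN_RPM and map_kpa < DFCO_MAX_MAP_KPA

print(dfco_active(2500, 25.0, True))   # True: normal closed-throttle coast
print(dfco_active(2500, 45.0, True))   # False: added air raised MAP, fueling resumes
```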
In the past, people have tried to reduce throttling loss during DFCO by using the IAC throttle-follower routine in the ECM, where IAC steps are added when coasting, or by initiating EGR operation during coasting. Neither method has improved efficiency, and both have often caused other problems, like the engine dying at low loads and the above-mentioned kicking out of DFCO.
At the end of the day, the engine has to move X amount of mass from the atmosphere into the cylinder to make X amount of power. You have to create a pressure drop to move this mass, and it is either going to occur primarily at the intake valve, or as a combination of intake valves and throttle valve. As you open the throttle and reduce the local pressure differential there, the efficiency loss due to the throttle decreases while the pressure differential, and the efficiency loss, at the intake valve increases, in a roughly proportional trade-off. Either way, both losses are very minuscule during normal driving conditions, where you must control the engine's VE because of the type of fuel you are using.
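Here is a small sketch of that trade-off, with illustrative pressures only: the total drop from atmosphere to cylinder is set by the VE you need, and opening the throttle just shifts where that drop (and the loss) occurs.

```python
# Sketch of the trade-off described above: for a given required cylinder
# filling, the total pressure drop from atmosphere to cylinder is roughly
# fixed; opening the throttle shifts the drop toward the intake valve.
# Numbers are illustrative only.

P_ATM = 101.3   # kPa
P_CYL = 55.0    # kPa effective cylinder pressure for the target VE (assumed)

for throttle_share in (0.9, 0.5, 0.1):   # fraction of the drop taken at the plate
    total_drop = P_ATM - P_CYL
    at_throttle = total_drop * throttle_share
    at_valve = total_drop - at_throttle
    print(f"throttle: {at_throttle:4.1f} kPa   intake valve: {at_valve:4.1f} kPa   "
          f"total: {total_drop:.1f} kPa (unchanged)")
```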
No matter where or how you create your mechanical pressure differential, it is going to cost you some efficiency that you will not be able to recover. The good news is that what little is lost at the throttle is more than made up for by the fine resolution the throttle gives the driver for matching engine VE to driving conditions. We all know that a good driver can use throttle control to improve FE.
Also, FYI, an analysis of cylinder pressure in your average non-running engine will not be representative of the actual pressure differentials in a running engine, for a few reasons. First, the pressure measured in the chamber is subject to pressure changes in neighboring cylinders through both the shared intake plenum and the shared exhaust manifold. There is actually a high level of back-pumping, or reversion, into the intake manifold during very low-RPM operation such as cranking speeds. This is observable by watching a vacuum gauge or the MAP sensor output during the operation. This source of error could have been eliminated by performing the test on a single-cylinder engine.
The next issue is that extreme pulsing in the ports (compression waves), combined with reversion in a running engine, greatly affects localized pressure differentials, such as the one the compression test was trying to analyze from averaged chamber readings. Just as the actual pumping loss for expelling the exhaust charge from a running cylinder is much less than simple calculations would predict, the same is true for the intake cycle, though for different reasons that change with varying driving conditions, none of which a non-running compression test does a good job of emulating.
What it (the compression test) does show is simply the driver's ability to control the VE of the engine via throttle modulation. The calculated energy lost from a closed throttle could only be applied correctly to a non-running engine being cranked, or to a non-combustion device like an electric or hand-operated pump. Once combustion starts, the operating conditions change dramatically.
All of this is getting further off-track from the BSFC subject.
It has yet to be mentioned here, but a big aspect of what determines where best BSFC falls on an RPM chart is the particular tune of the engine, with spark advance having far more impact than AFR alone. Given that, how well the charge burn-speed matches the corresponding piston speed has at least as much effect on BSFC as the restricted VE and/or the thermal rejection rates.
Simply put, the burn-speed needs to closely match the piston speed in order to achieve best efficiency. That's what tuning is all about.
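As a rough illustration of that matching, the sketch below assumes the charge takes a few milliseconds to burn (shrinking mildly with RPM as in-cylinder turbulence rises) and uses the common rule of thumb that 50% of the mass is burned near 8 degrees ATDC at MBT. The burn-time model and its exponent are invented for illustration, not taken from any real calibration.

```python
# Why spark advance must track RPM to keep burn speed matched to piston
# speed: a burn of roughly fixed *time* spans more crank degrees as RPM
# rises, so the spark must fire earlier to land peak pressure at the
# same point after TDC. Burn model and exponent are assumed.

def burn_time_ms(rpm):
    # Assumed: ~4 ms at 1000 RPM, shrinking mildly with RPM via turbulence.
    return 4.0 * (1000.0 / rpm) ** 0.3

def required_advance_btdc(rpm, mfb50_target_atdc=8.0):
    # Rule of thumb: 50% mass-fraction-burned near 8 deg ATDC at MBT.
    deg_per_ms = rpm * 360.0 / 60000.0          # crank degrees per millisecond
    burn_deg = burn_time_ms(rpm) * deg_per_ms   # burn duration in crank degrees
    return burn_deg / 2.0 - mfb50_target_atdc

for rpm in (1000, 2000, 4000, 6000):
    print(f"{rpm:5d} RPM -> spark ~{required_advance_btdc(rpm):4.1f} deg BTDC")
```

Under these assumptions the required advance climbs from roughly 4 degrees BTDC at 1000 RPM to around 34 degrees at 6000 RPM, which is the basic shape every spark table has to follow.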
The OBD1 and OBD2 ECMs and PCMs do NOT contain adaptive spark logic that adjusts to find maximum combustion speed, fuel efficiency, or best BSFC. The logic is programmed to limit spark advance to a particular window of operation that, along with the EGR function, is designed to reduce combustion speed, temperature, and efficiency. In the process it compromises BSFC and shifts best BSFC to a relatively higher RPM range, which is also not good for fuel efficiency.
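A hypothetical sketch of that windowing, with an invented MBT curve and invented clamp limits (real calibrations differ):

```python
# Hypothetical sketch of the spark-window clamping described above.
# The MBT curve and window limits are invented for illustration.

def mbt_advance(rpm):
    # Stand-in MBT curve (deg BTDC), assumed: rises with RPM.
    return 10.0 + 24.0 * (rpm - 1000.0) / 5000.0

SPARK_WINDOW = (6.0, 24.0)   # assumed calibration limits (deg BTDC)

def commanded_advance(rpm):
    lo, hi = SPARK_WINDOW
    return max(lo, min(hi, mbt_advance(rpm)))

for rpm in (1000, 3000, 5000, 6000):
    mbt, cmd = mbt_advance(rpm), commanded_advance(rpm)
    note = "clamped below MBT" if cmd < mbt else "at MBT"
    print(f"{rpm:5d} RPM: MBT ~{mbt:4.1f} deg, commanded {cmd:4.1f} deg ({note})")
```

Once the commanded advance is clamped below MBT in the upper RPM range, the burn lands later than optimal there, which is one mechanism by which best BSFC gets pushed around on the RPM chart.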
So there you have it. The EPA regulations have far more impact on the BSFC you can legally achieve than issues like throttle suctioning losses.