Capacitor placement in a distribution system can reduce power losses in the electrical distribution feeder. Beyond loss reduction, there are other benefits such as an improved receiving-end voltage profile, released equipment capacity, and more. However, if the placement of capacitors is not optimized, it can cause either (1) minimal loss reduction or (2) additional losses due to kVAR overcompensation. The second drawback, which comes from improperly installed capacitors, is the one to avoid at all costs; it is like buying a hammer just to strike your own head.
So, how does this overcompensation occur?
Please refer to the following figure:
In an ideal scenario, the capacitor size and location are said to be optimal if the current it injects at the load bus is equal to the imaginary (reactive) component of the current consumed by the load. In that case, the only component of current flowing in the line is the real part. But due to the non-linearity, varying loads, and complexity of the system, this ideal case is very hard to achieve.
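This ideal cancellation can be sketched with complex phasor currents. The numbers below are hypothetical example values, not taken from the figure; the capacitor is sized exactly to the load's reactive demand.

```python
# Ideal compensation: the capacitor injection exactly cancels
# the imaginary (reactive) part of the load current.
load_current = 10 + 5j        # 10 A real demand + 5 A reactive demand
capacitor_injection = 5j      # capacitor sized exactly to the reactive demand

line_current = load_current - capacitor_injection
print(line_current)           # (10+0j): only the real part flows in the line
print(abs(line_current))      # 10.0 A line current magnitude
```

With the reactive part fully cancelled, the line carries only the 10 A real component, which is the minimum possible current magnitude for this load.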
Now, referring again to the figure, what happens if the capacitor is oversized? That is, what if the load requires 5i Amps but the capacitor supplies 20i Amps? What happens to the excess? The excess 15i Amps becomes the imaginary part of the line current. Initially, without the capacitor, the line current is 10 + 5i; upon connection of the capacitor, the new line current becomes 10 + 5i - 20i, which is equal to 10 - 15i. Comparing the magnitudes of the two cases, the current magnitude with the capacitor installed is much higher than the magnitude without it. This is the case called overcompensation: instead of reducing the current magnitude, the capacitor further increased it, thus further increasing power losses in the line. This is one of the reasons why it is necessary to optimize the size and location of capacitors.
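The overcompensation arithmetic above can be verified numerically. This sketch uses the article's own numbers (10 + 5i load current, 20i capacitor injection) and compares current magnitudes; since I²R losses scale with the square of the current magnitude, the loss ratio follows directly.

```python
# Overcompensated case: the capacitor injects far more reactive
# current than the load consumes.
load_current = 10 + 5j          # line current without any capacitor
oversized_injection = 20j       # capacitor supplies 20i A; load needs only 5i A

compensated = load_current - oversized_injection
print(compensated)              # (10-15j): 15i A of excess now flows in the line
print(abs(load_current))        # ~11.18 A, magnitude without the capacitor
print(abs(compensated))         # ~18.03 A, magnitude WITH the capacitor: higher!

# I^2 * R losses scale with |I|^2, so the loss ratio is:
print(abs(compensated)**2 / abs(load_current)**2)   # 2.6x the original losses
```

Even though the capacitor was meant to reduce losses, the oversized injection raises the line-current magnitude from about 11.18 A to about 18.03 A, multiplying the line losses by 2.6.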