The charging time is given by the capacity (in Ah) divided by the charging current (in A).
Thus a 1200 mAh cell charged at 120 mA takes 10 hours to fully charge.
Putting cells in series has no effect on the charging time. It is not a good idea to put them
in parallel, though, as if one goes short circuit the others will discharge through it - ever seen a NiCad
melt down?
In theory you can charge at any current and adjust the time accordingly; however, to prevent the
cell getting too hot it is usually best to stay below (capacity/10).
At a charge rate of capacity/16 or less most cells can be left on charge indefinitely, with the
exception of button cells, where it is best to stay below capacity/100.
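To put rough numbers on those rules of thumb, here is a quick back-of-the-envelope sketch
(Python, purely illustrative - the names are my own, not any standard):

    def charge_time_hours(capacity_mah, current_ma):
        # time to reach full charge, ignoring charging losses
        return capacity_mah / current_ma

    def suggested_max_currents_ma(capacity_mah):
        # the capacity/10, capacity/16 and capacity/100 guidelines above
        return {
            "avoid_overheating": capacity_mah / 10,
            "indefinite_trickle": capacity_mah / 16,
            "button_cell_trickle": capacity_mah / 100,
        }

    print(charge_time_hours(1200, 120))       # 10.0 hours, as in the example
    print(suggested_max_currents_ma(1200))    # {'avoid_overheating': 120.0, ...}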
Hope this helps
Carl
Electrical Eng. University of Liverpool U.K.
Does this mean that chargers are normally designed as constant current
sources? And are you saying that pumping a constant current into the
battery even after it's charged is OK?
I thought I read somewhere that leaving NiCad batteries in the charger
for an extended period of time was bad. Is this not really true if
the current is low enough? Is it bad only because the energy is turned
into heat, and the extended exposure to the heat damages the battery
somehow?
So can something simple like this work OK as a charger for a single
NiCad (1.2 V?) cell?
+----/\/\/\/\---+
|       R       |
|+              +----> +
6V                       To cell to be charged
|-              +----> -
|               |
+---------------+
Where you calculate R to produce a reasonable current for both a dead
and a fully charged battery? Say you had a 1200 mAh cell and you
wanted it to charge at 75 mA (1200/16) when fully charged; then use an R of
(6 - 1.2)/0.075, or 64 ohms. This would produce a maximum current of 6/64
(about 94 mA), which seems safe enough.
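The same arithmetic written out as a little Python sketch, just so the worst-case
numbers are easy to check (the values are the ones from the example above):

    V_SUPPLY = 6.0    # volts
    V_CELL   = 1.2    # volts, roughly a fully charged NiCad
    I_TARGET = 0.075  # amps, i.e. 1200 mAh / 16

    r = (V_SUPPLY - V_CELL) / I_TARGET   # 64 ohms
    i_max = V_SUPPLY / r                 # ~0.094 A worst case (cell at 0 V)
    p_max = i_max ** 2 * r               # ~0.56 W dissipated in R in that case

    print(r, i_max, p_max)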
So is this all it takes to make a charger? Granted, I see many improvements,
like a transistor to reduce the power wasted in R, or some type of circuit
to prevent you from damaging a cell you connected backwards to the
charger, but otherwise, should the above circuit work OK?
Curt Welch