
I2C Single Master: peripheral or bit banging?


pozz

Nov 20, 2020, 3:43:38 AM
I hate I2C for several reasons. It's only a two-wire bus, but for that
very reason it is insidious.

I usually use hw peripherals when they are available, because it's much
more efficient and because in many cases it's the only possibility.
Nowadays we have MCUs with abundant UARTs, timers and so on, so there's
no real debate: choose a suitable MCU and use that damn peripheral.
So I usually start with the I2C peripherals available in MCUs, but I
have found many issues.

I have experience with the AVR8 and SAMC21 from Atmel/Microchip. In both
cases the I2C peripheral is much more complex than a UART or similar
serial interface. I2C Single Master, which is the most frequent
situation, is very simple, but I2C Multi Master introduces many critical
situations.
I2C peripherals usually promise multi-master compatibility, so their
internal state machine is somewhat complex... and often there's some bug
or unexpected situation that leaves the code stuck at some point.

I want to write reliable code that not only works most of the time, but
that works ALL the time, in any situation (ok, 99%). So my first test
with I2C is making a temporary short between SCL and SDA. In this case,
the I2C in the SAMC21 (they named it SERCOM in I2C Master mode) hangs
forever.
The manual says to write the ADDR register to start putting the address
on the bus and wait for an interrupt flag when it ends. This interrupt
never fires. I see the lines go down (because the START condition pulls
SDA low before SCL), but the INTFLAG bits stay cleared forever. Even the
error bits in the STATUS register (bus error, arbitration lost, any sort
of timeout...) stay cleared, and BUSSTATE is IDLE. As soon as the short
is removed, the state machine goes on.

Thinking I might be doing something wrong, I studied the Atmel Software
Framework[1] and the Arduino Wire library[2]. In both cases, a timeout
is implemented at the driver level.
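A driver-level guard of that kind can be sketched as follows. This is only an illustration of the idea: the function name `wait_flag_timeout` and the injected tick/flag callbacks are hypothetical, not the SERCOM register API or the ASF code.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical hooks: flag_fn polls the peripheral's interrupt flag,
 * tick_fn returns a free-running tick counter. */
typedef bool (*flag_fn)(void);
typedef uint32_t (*tick_fn)(void);

/* Wait for a peripheral flag, but give up after timeout_ticks.
 * Returns true if the flag was seen, false on timeout; on false the
 * caller would perform the recovery (e.g. CTRLA.SWRST on the SAMC21). */
bool wait_flag_timeout(flag_fn flag_set, tick_fn now, uint32_t timeout_ticks)
{
    uint32_t start = now();
    while (!flag_set()) {
        if ((uint32_t)(now() - start) >= timeout_ticks)
            return false;   /* hung bus: flag never came */
    }
    return true;
}
```

The unsigned subtraction `now() - start` keeps the comparison correct across tick-counter wraparound.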

Even the datasheet says:

"Note:  Violating the protocol may cause the I2C to hang. If this
happens it is possible to recover from this state by a
software reset (CTRLA.SWRST='1')."

I think driver code should be able to trust the hw; there's a contract
between them, otherwise writing a driver is impossible. For a UART
driver, you write the DATA register and wait for an interrupt flag
telling you when new data can be written to the register. If the
interrupt never fires, the driver hangs forever.
But I have never seen a UART driver that uses a timeout to recover from
hardware that could hang. And I have used UARTs for many years now.


Considering all these big issues when you want to write reliable code,
I'm considering dusting off the good old bit-banging technique.
For the I2C Single Master scenario, it IS very simple: put data low/high
(three-state), put clock low/high. The only problem is calibrating the
clock frequency, but if you have a free timer that is simple too.

What is the drawback of bit banging? Maybe you write a few additional
lines of code (you have to clock out 9 pulses in software), but I don't
think much more than using a peripheral and protecting it with a
timeout. But you get code that is fully under your control: you know
when the I2C transaction starts, and you can be sure it will end, even
when there are hw issues on the board.
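The primitives really are that small. The sketch below shows single-master START, STOP, and byte-write with ACK sampling; the `i2c_pins` struct of pin callbacks is an illustrative assumption (map them to your open-drain GPIO registers), not code from any post in this thread.

```c
#include <stdbool.h>

/* Hypothetical open-drain pin ops: *_low pulls the line down, *_release
 * lets the pull-up raise it, sda_read samples the line, delay waits
 * roughly half a bit time. */
struct i2c_pins {
    void (*sda_low)(void);  void (*sda_release)(void);
    void (*scl_low)(void);  void (*scl_release)(void);
    bool (*sda_read)(void);
    void (*delay)(void);
};

void i2c_start(const struct i2c_pins *p)
{
    p->sda_release(); p->scl_release(); p->delay();
    p->sda_low();     p->delay();        /* SDA falls while SCL is high */
    p->scl_low();
}

void i2c_stop(const struct i2c_pins *p)
{
    p->sda_low();     p->delay();
    p->scl_release(); p->delay();
    p->sda_release(); p->delay();        /* SDA rises while SCL is high */
}

/* Clock out one byte MSB first, then sample the ACK bit.
 * Returns true if the slave pulled SDA low (ACK). */
bool i2c_write_byte(const struct i2c_pins *p, unsigned char b)
{
    for (int i = 7; i >= 0; i--) {
        if (b & (1u << i)) p->sda_release(); else p->sda_low();
        p->delay();
        p->scl_release(); p->delay();
        p->scl_low();
    }
    p->sda_release();                    /* let the slave drive ACK */
    p->delay();
    p->scl_release(); p->delay();
    bool ack = !p->sda_read();           /* low = ACK */
    p->scl_low();
    return ack;
}
```

A clock-stretch-tolerant version would additionally wait, with a timeout, for SCL to actually read back high after each `scl_release()`.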





[1]
https://github.com/avrxml/asf/blob/68cddb46ae5ebc24ef8287a8d4c61a6efa5e2848/sam0/drivers/sercom/i2c/i2c_sam0/i2c_master.c#L406

[2]
https://github.com/acicuc/ArduinoCore-samd/commit/64385453bb549b6d2f868658119259e605aca74d

Bernd Linsel

Nov 20, 2020, 5:38:20 AM
1. The interrupt will only fire if a connected slave acknowledges the
address. If you want to catch the situation of a non-acknowledged start
& address byte, you have to set up a timer that times out.

2. I²C is asynchronous, you don't need to keep a fixed bit rate. Just
pulse SCL as fast as you can/need (within the spec). Slaves can slow
things down by holding SCL low when they can't keep up with the master's
speed.

3. As you not only have to bit-bang SCL & SDA according to the protocol,
but also monitor SCL before you go on, even implementing a s/w I²C
master correctly is not trivial; additionally, the CPU load is remarkable.

If you have difficulties using the I²C peripherals, just have a peek at
the corresponding Linux driver sources. They are often very reliable (at
least for chip families that are a bit mature), and any known issues are
documented (cf. the Raspberry Pi's SPI driver bug in the first
versions).

Regards
Bernd


pozz

Nov 20, 2020, 6:45:24 AM
False, at least for the SERCOM in I2C Master mode (though I suspect
other MCUs behave the same).
Quoting from C21 datasheet:

"If there is no I2C slave device responding to the address packet,
then the INTFLAG.MB interrupt flag and
STATUS.RXNACK will be set. The clock hold is active at this point,
preventing further activity on the bus."


> 2. I²C is asynchronous, you don't need to keep a fixed bit rate. Just
> pulse SCL as fast as you can/need (within the spec). Clients can adjust
> the speed pulling down SCL when they can't keep up with the master's
> speed.
Yes, I know. But you need some pauses in bit banging, otherwise you will
run too fast. And you need calibration if you use a dumb loop on a
volatile counter.


> 3. As you not only have to bit-bang SCL & SDA according to the protocol,
> but also monitor SCL before you go on, even implementing a s/w I²C
> master correctly is not trivial; additionally, the CPU load is
> remarkable.
If I'm not mistaken, that only matters if you have some slaves that
stretch the clock to slow down the transfer. Even in this case, I don't
think monitoring the SCL line during the transfer is complex. Yes, you
should have a timeout, but you need one even when you use the hw
peripheral.

CPU load? Many times I2C is used in a blocking way, waiting for an
interrupt flag. In that case, there's no difference between the CPU
waiting for an interrupt flag and driving the SCL and SDA lines.

Even if you need a non-blocking driver, you could use a hw timer and
bit-bang in the timer's interrupt service routine.
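That timer-driven approach amounts to a small state machine that advances one SCL half-period per timer tick. The skeleton below is a hypothetical sketch of the idea (START plus eight data bits only; ACK clock, reads, and STOP omitted for brevity), not anyone's actual driver:

```c
#include <stdbool.h>
#include <stdint.h>

enum i2c_state { I2C_IDLE, I2C_START, I2C_BITS, I2C_DONE };

/* Minimal non-blocking bit-bang context; call i2c_tick() from the
 * timer ISR, one call per SCL half-period. Pin setters are
 * hypothetical: argument true = release line (high via pull-up). */
struct i2c_bb {
    enum i2c_state state;
    uint8_t byte;               /* byte being shifted out */
    int bit;                    /* next bit index, 7..0 */
    bool scl_high;              /* current SCL phase */
    void (*sda_set)(bool release);
    void (*scl_set)(bool release);
};

void i2c_send_byte(struct i2c_bb *c, uint8_t b)
{
    c->byte = b;
    c->bit = 7;
    c->scl_high = false;
    c->state = I2C_START;
}

/* Advance one half-period; returns true while the transfer is busy. */
bool i2c_tick(struct i2c_bb *c)
{
    switch (c->state) {
    case I2C_START:                     /* SDA falls, then SCL follows */
        c->sda_set(false);
        c->scl_set(false);
        c->state = I2C_BITS;
        break;
    case I2C_BITS:
        if (!c->scl_high) {             /* low phase: put data bit out */
            c->sda_set((c->byte >> c->bit) & 1);
            c->scl_set(true);
            c->scl_high = true;
        } else {                        /* high phase: bit clocked in */
            c->scl_set(false);
            c->scl_high = false;
            if (c->bit-- == 0)
                c->state = I2C_DONE;    /* ACK clock omitted here */
        }
        break;
    default:
        return false;
    }
    return c->state != I2C_DONE;
}
```

The main loop (or an RTOS task) just polls `c->state` for completion; the CPU cost per tick is a handful of instructions.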


> If you have difficulties using the I²C peripherals, just have a peek on
> the according linux driver sources. They are often very reliable (at
> least for chip families that are a bit mature), and if there exist any
> issues, they are documented (cf. Raspi's SPI driver bug in the first
> versions).
I'm talking about MCUs. Linux can't run on these devices.


Richard Damon

Nov 20, 2020, 8:09:21 AM
I think the issue is that, because of what you did, when the controller
started the cycle and issued a start bit to 'get' the bus, it saw that
'someone else' did the same thing but got farther.

A Start bit is done by, with SCL and SDA both high, first pull SDA low,
and then SCL low a bit later. When the controller pulls SDA low, it then
looks and sees SCL already low, so it decides that someone else beat it
to the punch of getting the bus, so it backs off and waits. I suspect
that at that point it releases the bus, SDA and SCL both go high at the
same time (which is a protocol violation) and maybe the controller sees
that as a stop bit and the bus now free, so it tries again, or it just
thinks the bus is still busy.

This is NOT the I2C 'Arbitration Lost' condition, as that pertains to
the case where you think you won the arbitration, but at the same time
someone else also thought they won it, and while sending a bit, you
find that your 1 bit became a 0 bit, so you realize (late) that you had
lost the arbitration, and thus need to abort your cycle and resubmit it.

This is a case of arbitration never won, and most devices will require
something external to the peripheral to supply any needed timeout mechanism.

Most bit-banged master code I have seen assumes single-master, as it
can't reliably test for this sort of arbitration lost condition, being a
bit too slow.

pozz

Nov 20, 2020, 9:15:14 AM
Il 20/11/2020 14:09, Richard Damon ha scritto:
> I think the issue is by what you did when the controller started the
> cycle and issued a start bit to 'get' the bus, it sees that 'someone'
> else did the same thing but got farther.
>
> A Start bit is done by, with SCL and SDA both high, first pull SDA low,
> and then SCL low a bit later. When the controller pulls SDA low, it then
> looks and sees SCL already low, so it decides that someone else beat it
> to the punch of getting the bus, so it backs off and waits. I suspect
> that at that point it releases the bus, SDA and SCL both go high at the
> same time (which is a protocol violation) and maybe the controller sees
> that as a stop bit and the bus now free, so it tries again, or it just
> thinks the bus is still busy.
No, SCL and SDA stay low forever. Maybe it drives SDA low, then SCL,
then tries to release one of SCL or SDA and fails at that.


> This is NOT the I2C 'Arbitration Lost' condition, as that pertains to
> the case where you think you won the arbitration, but at the same time
> somoeone else also thought they won it, and while sending a bit, you
> find that your 1 bit became a 0 bit, so you realize (late) that you had
> lost the arbitarion, and thus need to abort your cycle and resubmit it.
Ok, call it bus error, I2C violation, I don't know. The peripheral is
full of low-level timeouts and flags signaling that something strange
happened. But shorting SDA and SCL will not set any of these bits.


> This is a case of arbitration never won, and most devices will require
> something external to the peripheral to supply any needed timeout
> mechanism.
At least the peripheral should be able to report the strange bus state,
but STATUS.BUSSTATE is always IDLE.


> Most bit-banged master code I have seen, assumes single-master, as it
> can't reliably test for this sort of arbitration lost condition, being a
> bit to slow.
Of course, take a look at the subject of my post.

Stephen Pelc

Nov 20, 2020, 9:54:04 AM
On Fri, 20 Nov 2020 09:43:32 +0100, pozz <pozz...@gmail.com> wrote:

>Considering all these big issues when you want to write reliable code,
>I'm considering to wipe again the old and good bit banging technique.
>For I2C Single Master scenario, it IS very simple: put data low/high
>(three-state), put clock low/high. The only problem is to calibrate the
>clock frequency, but if you a free timer it will be simple too.
>
>What is the drawback of bit banging? Maybe you write a few additional
>lines of code (you have to spit off 9 clock pulses by code), but I don't
>think much more than using a peripheral and protect it with a timeout.
>But you earn a code that is fully under your control and you know when
>the I2C transaction starts and you can be sure it will end, even when
>there are some hw issues on the board.

The big advantage of bit banging is reliability. I2C is an
edge-triggered protocol. In our experience, some I2C peripherals
are very prone to error or lockup on fast noise pulses.

A client with a train control application carefully wrote an I2C
peripheral driver. On test, it failed a few times a day. As a
reference, the client replaced the driver with our old bit-bang
driver. In two weeks, there were no failures.

Yes, a bit-bang driver needs to be carefully designed if CPU load
is an issue. Choice of buffer chips can be useful in a high noise
environment, e.g. hospital autoclave with switched heating elements.

Stephen


--
Stephen Pelc, ste...@vfxforth.com <<< NEW
MicroProcessor Engineering Ltd - More Real, Less Time
133 Hill Lane, Southampton SO15 5AF, England
tel: +44 (0)23 8063 1441, +44 (0)78 0390 3612
web: http://www.mpeforth.com - free VFX Forth downloads

John Speth

Nov 20, 2020, 9:56:16 AM
> What is the drawback of bit banging?

If the MCU provides all the necessary capability to bit bang, there is
no downside for single cycle operations. I've bit banged I2C in my
career a handful of times just because I was too lazy to learn about the
I2C peripheral. My plan was always to replace the bit banging with the
peripheral *if necessary*. Usually data throughput requirements drive
whether or not I will use the peripheral instead of bit banging. You can
get great speed and efficiency improvements using the I2C peripheral
with DMA.

There is no shame in implementing bit banging.

You might encounter an I2C device that requires full or partial bit
banging. For example, I encountered a device that issued a non-I2C pulse
during some part of an I2C transaction series. The pulse represented the
completion of an internal ADC voltage conversion and indicated it was
time to collect the data value. I was on the fence on whether to bit
bang or use the peripheral hardware.

JJS

Bernd Linsel

Nov 20, 2020, 9:57:31 AM
On 20.11.2020 12:45, pozz wrote:

> [...]
>
> > If you have difficulties using the I²C peripherals, just have a peek on
> > the according linux driver sources. They are often very reliable (at
> > least for chip families that are a bit mature), and if there exist any
> > issues, they are documented (cf. Raspi's SPI driver bug in the first
> > versions).

> I'm talking about MCUs. Linux can't run on these devices.

I'm fully aware of that. But Linux drivers often disclose some h/w
caveats and workarounds, or efficient strategies for dealing with the
peripheral's peculiarities...

Regards
Bernd


Tauno Voipio

Nov 20, 2020, 10:25:46 AM
If you connect SCL and SDA together, you create a permanent protocol
violation. The whole of I2C relies on the two lines being separate and
open-collector/drain. An unexpected short is a hardware failure. If
you're afraid of such a situation, you should test for it by bit-banging
before initializing the hardware controller.

--

-TV

pozz

Nov 20, 2020, 11:33:23 AM
I know that, and I don't expect it to work in this situation, but my
point is another.

If an I2C hw peripheral can hang for some reason (in my test I
deliberately made the short, but I imagine the hang could happen in
other circumstances not well documented in the datasheet), you should
protect the driver code with a timeout.
You have to test your code in all cases, even when the timeout occurs.
So you have to choose the timeout interval with great care, and you have
to decide whether blocking for that long is acceptable (even in that
rare situation).

Considering all of that, maybe bit-banging is much simpler and more
reliable.

Tauno Voipio

Nov 20, 2020, 12:39:15 PM
I have had thousands of industrial instruments in the field for decades,
each running some internal units with I2C, some bit-banged and others
on the hardware interfaces on the processors used, and not a single
failure due to I2C hanging.

Please remember that the I2C bus is an Inter-IC bus, not to be used for
connections to the outside of the device, preferably only on the same
circuit board. There should be no external connectors where e.g. the
shorts between the SCL and SDA could happen.

All the hardware I2C controllers have been restorable to a sensible
state with a software reset after a time-out. This includes the Atmel
chips.

--

-TV

Dimiter_Popoff

Nov 20, 2020, 4:25:25 PM
On 11/20/2020 19:39, Tauno Voipio wrote:
> On 20.11.20 18.33, pozz wrote:
>> Il 20/11/2020 16:25, Tauno Voipio ha scritto:
>>  > On 20.11.20 16.15, pozz wrote:
>>  >> Il 20/11/2020 14:09, Richard Damon ha scritto:
>> .....
>>
>> Considering all of that, maybe bit-banging is much more simple and
>> reliable.
>
>
> I have had thousands of industrial instruments in the field for decades,
> each running some internal units with I2C, some bit-banged and others
> on the hardware interfaces on the processors used, and not a single
> failure due to I2C hanging.
>
> Please remember that the I2C bus is an Inter-IC bus, not to be used for
> connections to the outside of the device, preferably only on the same
> circuit board. There should be no external connectors where e.g. the
> shorts between the SCL and SDA could happen.
>
> All the hardWare I2C controls have been able to be restored to a
> sensible state with a software reset after a time-out. This includes
> the Atmel chips.
>

I did manage once to upset (not to hang) an I2C line. I had routed SCL
or SDA (likely both, I don't remember) quite close to the switching
MOSFET of a HV flyback, which makes nice and steep 100V edges... :-).

I have dealt with I2C controllers on 2 parts I can think of now, and
both times it took me a lot longer to get them to work than it had taken
me on two earlier occasions when I bit-banged it.
They all did work of course, but the designs were sort of twisted. I
remember one of them took me two days, and I was counting minutes of my
time on that project. It may even have been 3 days; it was 10 years ago.

Dimiter

======================================================
Dimiter Popoff, TGI http://www.tgi-sci.com
======================================================
http://www.flickr.com/photos/didi_tgi/




Grant Edwards

Nov 20, 2020, 6:09:45 PM
On 2020-11-20, Dimiter_Popoff <d...@tgi-sci.com> wrote:

> I have dealt with I2C controllers on 2 parts I can think of now
> and both times it took me a lot longer to get them to work than it
> had taken me on two earlier occasions when I bit banged it though...

That's my experience also. I've done bit-banged I2C a couple times,
and it took about a half day each time. Using HW I2C controllers has
always taken longer. The worst one I remember was on a Samsung ARM7
part from 20 years ago. Between the mis-translations and errors in the
documentation and the bugs in the HW, it took at least a week to get
the I2C controller to reliably talk to anything.

--
Grant


Michael Kellett

Nov 21, 2020, 4:10:33 AM
If you do a bit banged interface do not forget to support clock
stretching by the slave.
Do not assume that the slave has no special timing requirements.
To do it right you need a hardware timer (or a cast iron guarantee that
the bit bang function won't be interrupted).

I've found hardware I2C controllers on micros to be 100% reliably a
problem. The manufacturers' drivers are often part of that problem.

I'm currently trying to debug someone else's non-working implementation
of an ST I2C peripheral controller. It uses ST's driver.

MK

Tauno Voipio

Nov 21, 2020, 4:46:16 AM
To add to that, the drivers from the hardware makers are also quite
twisted and difficult to integrate into surrounding software.

With ARM Cortexes, I'm not very fascinated by the provided drivers in
CMSIS. Every time, I have ended up writing my own.

--

-TV

Tauno Voipio

Nov 21, 2020, 4:48:26 AM
I have ended up jettisoning both ST's and Atmel's drivers and writing
my own. You might consider going that way.

--

-TV

David Brown

Nov 21, 2020, 6:07:03 AM
On 20/11/2020 18:39, Tauno Voipio wrote:

> I have had thousands of industrial instruments in the field for decades,
> each running some internal units with I2C, some bit-banged and others
> on the hardware interfaces on the processors used, and not a single
> failure due to I2C hanging.
>

The only time I have seen I²C buses hanging is during development, when
you might be re-starting the cpu in the middle of an operation without
there being a power-on reset to the slave devices. That can easily
leave the bus in an invalid state, or leave a slave state machine out of
synchronisation with the bus. But I have not seen this kind of thing
happen in a live system.

pozz

Nov 22, 2020, 11:35:06 AM
If the slave used clock stretching, I think the datasheet would say so
clearly.

> Do not assume that the slave has no special timing requirements.
> To do it right you need a hardware timer (or a cast iron guarantee that
> the bit bang function won't be interrupted).

Please, explain. I2C is synchronous to the clock transmitted by the
master. Of course the master should respect a range for the clock
frequency (around 100kHz or 400kHz), but I don't think a jitter on the
I2C clock, caused by an interrupt, could be a serious problem for the
slave.

pozz

Nov 22, 2020, 11:48:54 AM
Il 21/11/2020 12:06, David Brown ha scritto:
> On 20/11/2020 18:39, Tauno Voipio wrote:
>
>> I have had thousands of industrial instruments in the field for decades,
>> each running some internal units with I2C, some bit-banged and others
>> on the hardware interfaces on the processors used, and not a single
>> failure due to I2C hanging.
>>
>
> The only time I have seen I²C buses hanging is during development, when
> you might be re-starting the cpu in the middle of an operation without
> there being a power-on reset to the slave devices. That can easily
> leave the bus in an invalid state, or leave a slave state machine out of
> synchronisation with the bus. But I have not seen this kind of thing
> happen in a live system.
In the past I had a big problem with an I2C bus on a board: the
ubiquitous 24LC64 EEPROM connected to a 16-bit MCU from Fujitsu. In that
case, I2C was implemented with bit-bang code.

At startup the MCU read the EEPROM content and, if it was corrupted,
factory defaults were used and written back to the EEPROM. This
mechanism was introduced to initialize a blank EEPROM at the very first
power-up of a fresh board.

Unfortunately it sometimes happened that the MCU reset in the middle of
an I2C transaction with the EEPROM (the reset was caused by a glitch on
the power supply that triggered an MCU voltage supervisor).
When the MCU restarted, it tried to communicate with the EEPROM, but the
EEPROM was in an unsynchronized I2C state. This is well described in
AN686[1] from Analog Devices.

The MCU thought it was a blank EEPROM, so factory settings were used,
overwriting the user settings! What the user saw was that the machine
sometimes restarted with factory settings, losing their settings.

In that case the solution was adding an I2C reset procedure at startup
(some clock pulses and a STOP condition, as described in the Application
Note). I think this I2C bus reset procedure must always be added where
there's an I2C bus, and most probably it must be implemented with
bit-bang code.


[1]
https://www.analog.com/media/en/technical-documentation/application-notes/54305147357414AN686_0.pdf
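The recovery procedure described above can be sketched like this. The `pins` struct of open-drain pin callbacks is an illustrative assumption, not the Fujitsu-board code; the clocking itself follows the AN686 idea of pulsing SCL until the slave releases SDA, then issuing a STOP.

```c
#include <stdbool.h>

/* Hypothetical open-drain pin ops, as in a bit-bang driver. */
struct pins {
    void (*sda_low)(void);  void (*sda_release)(void);
    void (*scl_low)(void);  void (*scl_release)(void);
    bool (*sda_read)(void);
    void (*delay)(void);
};

/* Bus-clear: clock SCL (up to 9 pulses) until the stuck slave finishes
 * shifting out its bit and releases SDA, then generate a STOP.
 * Returns false if SDA is still held low afterwards (the part really
 * needs a power cycle, as with the LM75 mentioned later in the thread). */
bool i2c_bus_clear(const struct pins *p)
{
    p->sda_release();
    for (int i = 0; i < 9 && !p->sda_read(); i++) {
        p->scl_low();     p->delay();
        p->scl_release(); p->delay();   /* slave shifts out its stuck bit */
    }
    if (!p->sda_read())
        return false;                   /* SDA still held: give up */
    /* STOP: SDA low-to-high transition while SCL is high */
    p->sda_low();     p->delay();
    p->scl_release(); p->delay();
    p->sda_release(); p->delay();
    return true;
}
```

Calling this once at boot, before the first EEPROM access, covers the reset-mid-transaction case described above.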

David Brown

Nov 22, 2020, 1:45:04 PM
Sure, add that kind of a reset at startup - it also helps if you are
unlucky when restarting the chip during development.

Also make sure you write two copies of the user data to the EEPROM, so
that you can survive a crash while writing to it.
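The two-copy idea can be sketched as below. Everything here is an illustrative assumption (the slot layout, the additive checksum, and the RAM array standing in for the EEPROM); a real driver would do paged I2C or SPI writes instead of `memcpy`.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Two fixed-size slots, each ending in a checksum byte. Updating them
 * one after the other means a reset mid-write corrupts at most one
 * copy, so the other still validates at the next boot. */
#define SLOT_SIZE 32
static uint8_t eeprom[2 * SLOT_SIZE];   /* RAM stand-in for the part */

static uint8_t checksum(const uint8_t *d, size_t n)
{
    uint8_t sum = 0;
    while (n--) sum += *d++;
    return (uint8_t)(~sum + 1);         /* data + checksum sum to 0 */
}

void settings_save(const uint8_t *data, size_t n)
{
    uint8_t buf[SLOT_SIZE] = {0};
    memcpy(buf, data, n);
    buf[SLOT_SIZE - 1] = checksum(buf, SLOT_SIZE - 1);
    memcpy(&eeprom[0], buf, SLOT_SIZE);          /* write copy A first */
    memcpy(&eeprom[SLOT_SIZE], buf, SLOT_SIZE);  /* then copy B */
}

/* Fill data from the first intact copy; false means both are corrupt
 * (only then fall back to factory defaults). Note an erased 0xFF part
 * fails the checksum, so a fresh EEPROM reads as "no valid copy". */
bool settings_load(uint8_t *data, size_t n)
{
    for (int slot = 0; slot < 2; slot++) {
        const uint8_t *s = &eeprom[slot * SLOT_SIZE];
        uint8_t sum = 0;
        for (size_t i = 0; i < SLOT_SIZE; i++) sum += s[i];
        if (sum == 0) {                 /* checksum valid */
            memcpy(data, s, n);
            return true;
        }
    }
    return false;
}
```

This directly prevents the "corrupted EEPROM, so load factory defaults" failure mode from the earlier post: a reset during the copy-A write leaves copy B intact.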

But if your board is suffering power supply glitches that are enough to
trigger the MCU brown-out, but not enough to cause a proper re-start of
the rest of the board, then /that/ is a major problem that you should be
trying to solve.

Michael Kellett

Nov 22, 2020, 3:21:52 PM
Jitter should not be a problem, but I have known slave devices to go to
sleep after a few ms of inactivity on the bus.

I've seen many bit banged attempts using processor loops for delays.
Then something changes (compiler upgrade, faster clock, whatever) and
then it all goes wrong.


MK


pozz

Nov 23, 2020, 2:44:43 AM
Yes, but it's not simple. As described in AN686, I2C EEPROMs don't have
a reset pin, so it's impossible for the MCU to reset the EEPROM at
startup. The only solution is to introduce dedicated hw to remove and
reapply the power supply.

This is the main reason I now prefer to use SPI EEPROMs (when possible),
because the slave-select signal of the SPI bus automatically restarts
the transaction.

David Brown

Nov 23, 2020, 3:42:14 AM
I too prefer SPI - it is often simpler and can be much faster.

However, if it is possible for your power supply to glitch and reset the
MCU without being turned off properly (and therefore resetting your
eeprom), you have a definite hardware problem on the board. (Of course,
boards vary in how much effort and money you are willing to spend to get
stability, and how unstable a supply you might have.)


David Brown

Nov 23, 2020, 3:44:59 AM
On 23/11/2020 08:44, pozz wrote:
> Il 22/11/2020 19:44, David Brown ha scritto:

>> But if your board is suffering power supply glitches that are enough to
>> trigger the MCU brown-out, but not enough to cause a proper re-start of
>> the rest of the board, then /that/ is a major problem that you should be
>> trying to solve.
>
> Yes, but it's not simple. As described in AN686, I2C EEPROMs doesn't
> have a reset, so it's impossible for the MCU to reset the EEPROM at
> startup. The only solution is to introduce dedicated hw to remove and
> reapply power supply.
>
> This is the main reason I now prefer to use SPI EEPROM (when possible),
> because slave-select signal of the SPI bus restart automatically the
> transaction.
>
I hit "send" before I included a final point - the "dedicated hardware
to remove the power supply to the eeprom" is usually just a GPIO pin
from the microcontroller. Often a GPIO pin can easily supply the
current needed for a low power device like an eeprom, so that you don't
need anything else.

Richard Damon

Nov 23, 2020, 7:25:06 AM
One thing to note about the I2C bus protocol, is that a High to Low
transition of the SDA line when SCL is high (a Start Bit) is supposed to
'Reset' the communication channel of every device on the bus and put it
in the mode to compare the next 8 bits as a Device Address.

Thus, if at the 'random' reset no device is holding the SDA line low,
then as soon as the master starts a new cycle, everything is back in
sync.

It is possible that a device is doing either an ACK or a read cycle at
the point of reset, holding SDA low. The master just needs to cycle SCL
until SDA goes high, letting the device complete that ack or read cycle.
The read might need up to 8 clocks. Once we have SDA and SCL high, we
can generate the Start to get everyone listening.

Devices should not be holding SCL low for extended periods, so that
shouldn't be a problem (or is a problem of a different nature if you do
have an oddball infinite bus extender).

Reinhardt Behm

Nov 24, 2020, 5:15:24 AM
On 11/23/20 8:25 PM, Richard Damon wrote:
> One thing to note about the I2C bus protocol, is that a High to Low
> transition of the SDA line when SCL is high (a Start Bit) is supposed to
> 'Reset' the communication channel of every device on the bus and put it
> in the mode to compare the next 8 bits as a Device Address.

Also a sequence of >= 10 clock pulses is supposed to get devices back
into normal operation.

Unfortunately not all devices have read that part of the spec.

I had one (an LM75) that could reproducibly be driven into a
non-responding mode by a short glitch on data or clock. The only way to
recover was a power cycle.

Mike Perkins

Nov 25, 2020, 12:43:55 PM
On 20/11/2020 08:43:32, pozz wrote:
> I hate I2C for several reasons. It's only two-wires bus, but for this
> reason it is insidious.
>
> I usually use hw peripherals when they are available, because it's much
> more efficient and smart and because it's the only possibility in many
> cases.
> Actually we have MCUs with abundant UARTs, timers and so on, so there's
> no real story: choose a suitable MCU and use that damn peripheral.
> So I usually start using I2C peripherals available in MCUs, but I found
> many issues.
>
> I have experience with AVR8 and SAMC21 by Atmel/Microchip. In both cases
> the I2C peripheral is much more complex than UART or similar serial
> lines. I2C Single Master, that is the most frequent situation, is very
> simple,

I don't have much experience with these micros, mainly ST ARM devices.
But single-master applications can be very reliable, as long as you
check all the status bits. Some examples only check for certain
patterns, which can leave a hanging peripheral.

> but I2C Multi Master introduces many critical situations.
> I2C peripherals usually promise to be compatible with multi-master, so
> their internal state machine is somewhat complex... and often there's
> some bug or situations that aren't expected that leave the code stucks
> at some point.

This is where I diverge. I would not choose to use I2C in a multi-master
system.

I2C was always intended to be used on a single PCB, without long wires
attracting EM noise.

> I want to write reliable code that not only works most of the time, but
> that works ALL the time, in any situations (ok, 99%). So my first test
> with I2C is making a temporary short between SCL and SDA. In this case,
> I2C in SAMC21 (they named it SERCOM in I2C Master mode) hangs forever.
> The manual says to write ADDR register to start putting the address on
> the bus and wait for an interrupt flag when it ends. This interrupt is
> never fired up. I see the lines goes down (because START bit is putting
> low SDA before SCL), but the INTFLAG bits stay cleared forever. Even
> error bits in STATUS register (bus error, arbitration lost, any sort of
> timeout...) stay cleared and the BUSSTATE is IDLE. As soon as the short
> is removed, the state-machine goes on.
>
> Maybe I'm wrong, so I studied Atmel Software Framework[1] and Arduino
> Wire libraries[2]. In both cases, a timeout is implemented at the driver
> level.
>
> Even the datasheet says:
>
>   "Note:  Violating the protocol may cause the I2C to hang. If this
>   happens it is possible to recover from this state by a
>   software reset (CTRLA.SWRST='1')."

Not all manufacturers say this. The trick is to detect an error
condition, clear the error and if necessary reset the peripheral.

> I think the driver code should trust the hw, between them there's a
> contract, otherwise it's impossibile. For a UART driver, you write DATA
> register and wait an interrupt flag when a new data can be written in
> the register. If the interrupt nevers fire, the driver hangs forever.
> But I have never seen a UART driver that uses a timeout to recover from
> a hardware that could hang. And I used UARTs for many years now.
>
>
> Considering all these big issues when you want to write reliable code,
> I'm considering to wipe again the old and good bit banging technique.
> For I2C Single Master scenario, it IS very simple: put data low/high
> (three-state), put clock low/high. The only problem is to calibrate the
> clock frequency, but if you a free timer it will be simple too.

Nothing wrong with bit-banging.

> What is the drawback of bit banging? Maybe you write a few additional
> lines of code (you have to spit off 9 clock pulses by code), but I don't
> think much more than using a peripheral and protect it with a timeout.
> But you earn a code that is fully under your control and you know when
> the I2C transaction starts and you can be sure it will end, even when
> there are some hw issues on the board.

There is no drawback, apart from being difficult to debug and a high MCU
utilisation. At least I2C is meant to be static, unlike SMBus.

--
Mike Perkins
Video Solutions Ltd
www.videosolutions.ltd.uk

pozz

Nov 26, 2020, 3:01:38 AM
Me too. I was only saying that the peripherals inside new MCUs are
compatible with I2C Multi-Master, so they contain complex state machines
(see arbitration lost).


> I2C was always intended to be used on a single PCB, without long wires
> attracting EM noise.
>
>> I want to write reliable code that not only works most of the time,
>> but that works ALL the time, in any situations (ok, 99%). So my first
>> test with I2C is making a temporary short between SCL and SDA. In this
>> case, I2C in SAMC21 (they named it SERCOM in I2C Master mode) hangs
>> forever. The manual says to write ADDR register to start putting the
>> address on the bus and wait for an interrupt flag when it ends. This
>> interrupt is never fired up. I see the lines goes down (because START
>> bit is putting low SDA before SCL), but the INTFLAG bits stay cleared
>> forever. Even error bits in STATUS register (bus error, arbitration
>> lost, any sort of timeout...) stay cleared and the BUSSTATE is IDLE.
>> As soon as the short is removed, the state-machine goes on.
>>
>> Maybe I'm wrong, so I studied Atmel Software Framework[1] and Arduino
>> Wire libraries[2]. In both cases, a timeout is implemented at the
>> driver level.
>>
>> Even the datasheet says:
>>
>> "Note:  Violating the protocol may cause the I2C to hang. If this
>> happens it is possible to recover from this state by a
>> software reset (CTRLA.SWRST='1')."
>
> Not all manufacturers say this. The trick is to detect an error
> condition, clear the error and if necessary reset the peripheral.
In my case, the peripheral doesn't trigger any error, and you can exit
from this situation only with a timeout.

Mike Perkins

Nov 26, 2020, 9:34:20 AM
Are you certain the main event interrupt isn't being called repeatedly
because not all the conditions that caused it have been serviced?

I'm more familiar with the ST ARM range and generally haven't seen a
hang condition that can't be assessed by looking at the status
registers. I obviously accept that other MCUs may behave differently.

I'm currently using FreeRTOS with a transmission-complete semaphore, so
implementing a short timeout is easy. That is also useful for repeating
the I2C transmission in the failed-NAK case.