Or, from the POV of a language designer who actually likes C:
Makes a language; could try to pass it off as a "C replacement";
Doesn't really bother, as C still does C things pretty well (and there is
still a non-zero cost to using my language in places where C works).
I have also written a C compiler (and a more experimental limited subset
of C++), so these are also options (for use-cases where I am using a
custom compiler).
A downside with doing a "C replacement" is that many of the "flaws" of C
will still exist in any language which can address the same use-cases:
* If one tries to side-step them (by omitting things), then the language
is painful to use;
* If one tries to provide alternatives that do similar things, but are
"safer"/..., then they come with overhead;
* One can't do a "simple" language and aim for this use-case, as by the
time it is "actually usable", its complexity is already on par with C;
* ...
While dynamic / "high level" languages can address some use-cases pretty
well, in the context of a "C alternative" they are generally unusable.
The same generally goes for languages with wildly different designs, ...
Many people add features which require runtime support, such as garbage
collection, which makes the language unsuitable for many use-cases (ex:
hard real-time and smaller microcontrollers); and even in big, fast,
soft-real-time uses (ex: a typical desktop PC) it is not ideal (the GC
still almost invariably sucks in one area or another).
Things like exceptions are "use with caution"; one needs to keep the ABI
cost minimal. The best scheme I am (currently) aware of is doing it
similarly to the Win64 ABI (using instruction ranges and using the
epilog sequence for unwinding). There is still a cost for storing a
table with per-function entries (ex: 8 or 16B or so per function).
Things like bounds-checked arrays are doable, but typically add the cost
of requiring "fat pointers" or similar (passed either in GPRs, or
indirectly via a reference, or via the ABI's usual "pass struct by
value" mechanism, *).
I have used this in my own language (which uses arrays partway between
the C and Java/C# styles), but it still carries a bit of overhead vs
C-style raw pointers.
*: Mostly leaning against going into "pass struct by value" mechanisms
here (this varies a fair bit between architectures and ABIs).
By the time a type-system is capable enough to do "all the usual C
stuff", it is already about as complicated as what is needed for the C
typesystem (or one pays in other ways).
Though, one can still simplify things partly vs the surface-level
language (this also applies to C), and divide the types into major types
and sub-types.
For Example: ILFDAX
* I: 32-bit int and sub-types.
* L: 64-bit long and sub-types.
* F: 32-bit float and sub-types.
* D: 64-bit double and sub-types.
* A: pointers / arrays / "struct by ref" / ...
* X: 128-bit types (int128, float128, vec4f, small structs, ...).
So, for example, everything which can fit into an 'int' or 'unsigned
int' could fit into 'I'; everything which is internally treated as a
64-bit integer type goes into 'L', ...
This is very similar to the model used internally by Java, but can be
made to accommodate C.
The main remaining "hairy" area is that C treats "int" and "unsigned
int" a bit differently in some areas, which may require special handling
for the unsigned cases. But, likewise, this is true for pretty much any
language which has unsigned integer types.
An OO / object-system is generally needed "de-facto" for a language to
be taken seriously, but even in the simplest cases it adds complexity
over "not having an object system".
Vs the C++ object system, it is possible to simplify things somewhat
though. Most obvious: Doing like Java and C# and dropping Multiple
Inheritance.
Similarly, one can do like C# and split 'struct' and 'class' into two
functionally distinct entities. This can reduce some implementation hair
vs trying to make both exist.
In my case, I still have some complexity (class-like structs), mostly as
it is still needed to support the C++ subset (though my subset did drop
MI). ( FWIW: Both my custom language and the C++ subset share the same
underlying compiler in this case, and aren't too far off from being
effectively re-skins of each other, *. Though another, separate VM-based
implementation of this language also exists. )
Similarly, in my case it is also possible to use the "natively compiled"
version in VM-style use cases, mostly by compiling to a RISC-style ISA
and then using partial emulation (such an ISA can still be useful while
also being relatively simple; and can protect the host's memory via an
address-space sandbox, or possibly with selective use of a "shared
memory" mechanism, ...).
*: Partly as a consequence, many things in one can be expressed in the
other "just with slightly different syntax", and while not quite as
copy/paste compatible with C as C++ is (due to mine using a more
Java-ish declaration syntax), cross-language interfacing is otherwise
relatively straightforward (same underlying ABI). This doesn't currently
target x86 based systems, but could probably do so if needed.
All this really leaves is C having (pros/cons) exposed raw pointers, and
some amount of ugly-named extension keywords (which can be remapped via
typedefs or defines).
And, in my case, I also have some features from my other language
exposed in C land via similarly ugly extensions, though these are
generally not used much, as reliance on language extensions is not good
for code portability (apart from cases where the extension is
implemented in a way where a plain C fallback can be provided).
Similarly, between the custom language and the C++ subset, the latter
(also) has the advantage that code written in it will still work with a
normal C++ compiler (and still has the drawback of it being C++ rather
than C).
Combined, a lot of this doesn't really make for a particularly strong
case for trying to replace C.