
Map iteration and modification


DrPi

Dec 28, 2023, 8:53:21 AM
Hi,

I need to delete nodes from a Hashed_Map. I don't know in advance which
nodes to delete. I have to iterate over the Map and delete the nodes
which fulfill a condition.
From the LRM I understand I can't delete nodes within a loop iterating
over the Map nodes. That makes sense.
What's the recommended way of doing this?
Iterate the Map and temporarily store the key nodes to be deleted then
delete the nodes from the key list ?
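
[For context, the restriction referred to above is the containers'
"tampering with cursors" check. A minimal sketch of the pattern it forbids,
assuming a Hashed_Maps instantiation (with a visible use clause), a map
My_Map and an application-specific Need_to_Delete; the deletion is attempted
while an iterator over the map is still active:

   for C in My_Map.Iterate loop
      if Need_to_Delete (Element (C)) then
         declare
            Doomed : Cursor := C;  --  the loop parameter itself is constant
         begin
            My_Map.Delete (Doomed);  --  raises Program_Error: tampering check
         end;
      end if;
   end loop;

Hence the two-pass approaches discussed below.]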

Nicolas

DrPi

Dec 28, 2023, 8:59:11 AM
On 28/12/2023 at 14:53, DrPi wrote:
> Iterate the Map and temporarily store the key nodes to be deleted then
> delete the nodes from the key list ?

Not clear. Let me rephrase it:
use two steps, iterating the Map and temporarily storing the keys of the
nodes to be deleted, then deleting those Map nodes using the key list?

Dmitry A. Kazakov

Dec 28, 2023, 11:06:23 AM
[Disclaimer. I am not talking about the standard library]

Provided a sane implementation of a map:

1. It is safe to loop over the map items in the *reverse* order,
deleting whatever items.

2. It is safe to walk whatever set of map keys, deleting items of the map.

In both cases the positions (#1) and the keys (#2) are invariant to the
operation of deletion.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de

DrPi

Dec 28, 2023, 12:57:36 PM
On 28/12/2023 at 17:06, Dmitry A. Kazakov wrote:
> On 2023-12-28 14:59, DrPi wrote:
>> On 28/12/2023 at 14:53, DrPi wrote:
>>> Iterate the Map and temporarily store the key nodes to be deleted
>>> then delete the nodes from the key list ?
>>
>> Not clear. Rephrasing it.
>> Using 2 steps by iterating the Map and temporarily store the keys of
>> nodes to be deleted then delete the Map nodes using the key list ?
>
> [Disclaimer. I am not talking about the standard library]

I'm using the standard library ;)

Randy Brukardt

Dec 28, 2023, 10:07:51 PM
"DrPi" <3...@drpi.fr> wrote in message
news:umjuvc$9sp$2...@rasp.pasdenom.info...
If the keys are messy to save (as say with type String), it might be easier
to save the cursor(s) of the nodes to delete. You would probably want to use
a cursor iterator (that is, "in") to get the cursors. Code would be
something like (declarations of the Map and List not shown, nor is the
function Need_to_Delete which is obviously application specific, Save_List
is a list of cursors for My_Map, everything else is standard, not checked
for syntax errors):

   Save_List.Clear; -- Clear the list of saved cursors.
   -- Find the nodes of My_Map that we don't need.
   for C in My_Map.Iterate loop
      if Need_to_Delete (My_Map.Element (C)) then
         Save_List.Append (C);
      -- else no need to do anything.
      end if;
   end loop;
   -- Delete the nodes designated by the saved cursors.
   for C of Save_List loop
      My_Map.Delete (C);
   end loop;
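
[For reference, a self-contained, compilable sketch of the same two-pass
idea. The instantiations, the test data and the particular Need_to_Delete
condition are illustrative assumptions, not part of the original post.]

   with Ada.Containers.Indefinite_Hashed_Maps;
   with Ada.Containers.Doubly_Linked_Lists;
   with Ada.Strings.Hash;

   procedure Delete_Demo is

      package String_Maps is new Ada.Containers.Indefinite_Hashed_Maps
        (Key_Type        => String,
         Element_Type    => Integer,
         Hash            => Ada.Strings.Hash,
         Equivalent_Keys => "=");
      use type String_Maps.Cursor;

      package Cursor_Lists is new Ada.Containers.Doubly_Linked_Lists
        (Element_Type => String_Maps.Cursor);

      My_Map    : String_Maps.Map;
      Save_List : Cursor_Lists.List;

      --  Application-specific condition; this one is made up for the demo.
      function Need_to_Delete (Value : Integer) return Boolean is
        (Value mod 2 = 0);

   begin
      My_Map.Insert ("one", 1);
      My_Map.Insert ("two", 2);
      My_Map.Insert ("four", 4);

      --  Pass 1: collect the cursors designating the nodes to remove.
      for C in My_Map.Iterate loop
         if Need_to_Delete (My_Map (C)) then
            Save_List.Append (C);
         end if;
      end loop;

      --  Pass 2: no iteration of My_Map is active any more, so deleting
      --  through the saved cursors does not trip the tampering checks.
      for C of Save_List loop
         My_Map.Delete (C);
      end loop;
   end Delete_Demo;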



Randy.


Randy Brukardt

Dec 28, 2023, 10:19:58 PM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
news:umk6ds$e9hc$1...@dont-email.me...
...
> Provided a sane implementation of map.
>
> 1. It is safe to loop over the map items in the *reverse* order of,
> deleting whatever items.

A sane implementation of a map does not have/require an ordering of keys. So
the idea of "reverse" or "forward" does not make sense for a general map.
(There are, of course, special cases where the keys have an order that
matters to the map; the standard ordered map is like that.) Assuming an
ordering is exposing the implementation unnecessarily.

You always complain about mixing implementation with interface, but you are
clearly doing that here. That technique really only works if the data
structure is implemented with an underlying array. If you have separately
allocated nodes, deletion might completely destroy a node that the iterator
is holding onto. Avoiding that takes significant efforts that sap
performance when you don't intend to modify the container you're iterating
(which is the usual case).

My longstanding objection to the entire concept of arrays is that they are
not a data structure, but rather a building block for making data
structures. One wants indexed sequences sometimes, cheap maps other times,
but arrays have all of the operations needed for both, along with other
capabilities not really related to data structures at all. It's way better
to declare what you need and get no more (visibly, at least). That makes it
way easier to swap implementations if that becomes necessary - you're not
stuck with a large array that really should be managed piecemeal.

Randy.


Dmitry A. Kazakov

Dec 29, 2023, 4:51:37 AM
On 2023-12-29 04:20, Randy Brukardt wrote:
> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
> news:umk6ds$e9hc$1...@dont-email.me...
> ...
>> Provided a sane implementation of map.
>>
>> 1. It is safe to loop over the map items in the *reverse* order of,
>> deleting whatever items.
>
> A sane implementation of a map does not have/require an ordering of keys.

Yes, but iterating a map requires an ordering, regardless of the properties
of the keys.

> So
> the idea of "reverse" or "forward" does not make sense for a general map.
> (There are, of course, special cases where the keys have an order that
> matters to the map; the standard ordered map is like that.) Assuming an
> ordering is exposing the implementation unnecessarily.

It always makes sense *IF* enumeration (needed for iteration) is
provided. Enumeration of pairs (<key>, <value>) is not the same as ordering
values by the keys.

> You always complain about mixing implementation with interface, but you are
> clearly doing that here. That technique really only works if the data
> structure is implemented with an underlying array. If you have separately
> allocated nodes, deletion might completely destroy a node that the iterator
> is holding onto. Avoiding that takes significant efforts that sap
> performance when you don't intend to modify the container you're iterating
> (which is the usual case).

No. First, it is two different interfaces. A view of a map as:

1. An ordered set of pairs (<key>, <value>)

2. A mapping <key> -> <value>

Second, the point is that both are array interfaces. The first has
position as the index, the second has the key as the index.

Both are invariant to removal of a pair, and any *sane* implementation must
be OK with that.

The problem is not whether you allocate pairs individually or not. The
insanity begins with things unrelated to the map:

1. OOP iterator object.

2. FP iteration function.

Both are bad ideas imposed by poor programming paradigms on the
implementation of a clear mathematical concept. That comes with
constraints, assumptions and limitations the array interface does not have.

   for Index in reverse Map'Range loop
      Map.Delete (Index);
   end loop;

would always work. OOP/FP anti-patterns, who knows?

> My longstanding objection to the entire concept of arrays is that they are
> not a data structure, but rather a building block for making data
> structures.

Arrays have interface and implementation. The array interface is a
mapping key -> value, the most fundamental thing in programming. An
array implementation as a contiguous block of values indexed by a linear
function is a basic data structure that supports the interface.

> One wants indexed sequences sometimes, cheap maps othertimes,
> but arrays have all of the operations needed for both, along with other
> capabilities not really related to data structures at all.

Let me help (:-))

One wants array interface without a built-in array implementation.

> It's way better
> to declare what you need and get no more (visibly, at least). That makes it
> way easier to swap implementations if that becomes necessary - you're not
> stuck with a large array that really should be managed piecemeal.

Sure. The problem with Ada is that it does not separate array interface
from its built-in array implementation and does not separate record
interface and implementation either.

Both are mappings. BTW in many cases people could prefer record
interface of a map to array interface:

Map.Key

instead of

Map (Key)

Now, tell me that you have a longstanding objection to the entire
concept of records... (:-))

DrPi

Dec 29, 2023, 8:53:58 AM
That's what I did, but I saved the keys (String) instead of the cursors.
Does it make a difference? Performance, maybe?
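
[For comparison, a sketch of this key-saving variant, assuming the same
String_Maps instantiation, My_Map and Need_to_Delete as in the sketch
earlier in the thread, plus a "with Ada.Containers.Indefinite_Vectors;".
It trades the cursor bookkeeping for one extra hash-and-lookup per
deleted key.]

   declare
      package Key_Vectors is new Ada.Containers.Indefinite_Vectors
        (Index_Type => Positive, Element_Type => String);
      Saved_Keys : Key_Vectors.Vector;
   begin
      --  Pass 1: copy the keys of the nodes to remove.
      for C in My_Map.Iterate loop
         if Need_to_Delete (My_Map (C)) then
            Saved_Keys.Append (String_Maps.Key (C));
         end if;
      end loop;

      --  Pass 2: delete by key; each Delete hashes and looks the key up again.
      for K of Saved_Keys loop
         My_Map.Delete (K);
      end loop;
   end;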

Nicolas
>
> Randy.
>
>

G.B.

Dec 29, 2023, 10:04:02 AM
On 29.12.23 10:51, Dmitry A. Kazakov wrote:
> On 2023-12-29 04:20, Randy Brukardt wrote:
>> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
>> news:umk6ds$e9hc$1...@dont-email.me...
>> ...
>>> Provided a sane implementation of map.
>>>
>>> 1. It is safe to loop over the map items in the *reverse* order of,
>>> deleting whatever items.
>>
>> A sane implementation of a map does not have/require an ordering of keys.
>
> Yes, but iterating a map requires ordering regardless properties of the keys.

Suppose that there is a way of orderly proceeding from one item to the next.
It is probably known to the implementation of map. Do single steps
guarantee transitivity, though, so that an algorithm can assume the
order to be invariable?

At the start of the algorithm, the assumption of order of items implies
an ordered sequence of all the keys. Someone might want to use this known
order for a cache of "index values". It might be the implementation
that does so.

Now some item is removed. The cache is no longer valid...

Insane? Or just tampering? (Randy Brukardt's example demonstrates
the mitigation using Cursor, I think.)


Maybe the bulk operations of some DBMS' programming
interfaces work just like this, for practical reasons.
Ada 202x' Ordered_Maps might want to add a feature ;-)

   procedure Delete (Container : in out Map;
                     From      : in out Cursor;
                     To        : in out Cursor);


Dmitry A. Kazakov

Dec 29, 2023, 11:52:09 AM
On 2023-12-29 16:03, G.B. wrote:
> On 29.12.23 10:51, Dmitry A. Kazakov wrote:
>> On 2023-12-29 04:20, Randy Brukardt wrote:
>>> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
>>> news:umk6ds$e9hc$1...@dont-email.me...
>>> ...
>>>> Provided a sane implementation of map.
>>>>
>>>> 1. It is safe to loop over the map items in the *reverse* order of,
>>>> deleting whatever items.
>>>
>>> A sane implementation of a map does not have/require an ordering of
>>> keys.
>>
>> Yes, but iterating a map requires ordering regardless properties of
>> the keys.
>
> Suppose that there is a way of orderly proceeding from one item to the
> next.
> It is probably known to the implementation of map. Do single steps
> guarantee transitivity, though, so that an algorithm can assume the
> order to be invariable?

An insane implementation can expose random orders each time.

> At the start of the algorithm, the assumption of order of items implies
> an ordered sequence of all the keys.

You do not need ordered keys to enumerate pairs. For example, consider a
2D array. As a map it has keys (row, column) which are unordered.

> Someone might want to use this known
> order for a cache of "index values". It might be the implementation
> that does so.

If not exposed through an interface, the order cannot be known. The
question is whether there must be such an interface or not. In my view a
good container library must provide a position->pair interface, no OOP
cursors/iterators and no functional stuff like Foreach.

> Insane? Or just tampering? (Randy Brukardt's example demonstrates
> the mitigation using Cursor, I think.)

Unless removing an element invalidates all cursors. Look, insanity has no
bounds. Cursors AKA pointers are as volatile as positions in certain
implementations. Consider a garbage collector running after removing a
pair and shuffling the remaining pairs in memory.

> Maybe the bulk operations of some DBMS' programming
> interfaces work just like this, for practical reasons.
> Ada 202x' Ordered_Maps might want to add a feature ;-)
>
>      procedure Delete (Container : in out Map;
>                        From      : in out Cursor;
>                        To        : in out Cursor);

Here you assume that cursors are ordered and the order is preserved from
call to call. Even if From and To are stable the range From..To can
include random pairs in between.

Randy Brukardt

Dec 30, 2023, 1:28:27 AM
"DrPi" <3...@drpi.fr> wrote in message
news:ummj1i$e64$1...@rasp.pasdenom.info...
... (Example eliminated)

> That's what I did but I saved the keys (String) instead of the cursors.
> Does it make a difference ? Performance maybe ?

It certainly will make a performance difference; whether that difference is
significant of course depends on the implementation. There are two parts to it
(one of which I thought of yesterday and the other which I forgot):
(1) The cost of storing keys vs. storing cursors. Cursors are going to be
implemented as small record types (canonically, they are two pointers, one
to the enclosing container and one to the specific node/element). A key can
be most anything, and storing that can be more costly (see the sketch below).
(2) The cost of looking up a key. A map is a set of nodes, and there
needs to be some operation to associate a key with the correct node. Those
operations take some time, of course: for a hashed map, the key has to be
hashed and then some sort of lookup performed. Whereas a cursor generally
contains an indication of the node, so the access is more direct.
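
[As an illustration of point (1), a minimal sketch of the "two pointers"
cursor layout described above. This is a hypothetical package, not the
actual standard-library or GNAT implementation; all names are made up.]

   generic
      type Key_Type is private;
      type Element_Type is private;
   package Sketch_Maps is
      type Map is limited private;
      type Cursor is private;
   private
      type Node;
      type Node_Access is access Node;
      type Node is record
         Key     : Key_Type;
         Element : Element_Type;
         Next    : Node_Access;       --  next node in the same bucket
      end record;
      type Map is limited record
         First_Bucket : Node_Access;  --  a real map keeps a bucket array
      end record;
      type Cursor is record
         Container : access constant Map;  --  pointer to the owning map
         Node      : Node_Access;          --  pointer to the designated node
      end record;
   end Sketch_Maps;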

For a lot of applications, this difference won't matter enough to be
significant. But I'd probably lean toward using cursors for this sort of job
as that would minimize performance problems down the line. (Of course, if
the container gets modified after you save the cursors, then they could
become dangling, which is a problem of its own. If that's a possibility,
saving the keys is better.)

Randy.




Randy Brukardt

Dec 30, 2023, 2:20:41 AM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
news:umm4r5$ppag$1...@dont-email.me...
> On 2023-12-29 04:20, Randy Brukardt wrote:
>> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
>> news:umk6ds$e9hc$1...@dont-email.me...
>> ...
>>> Provided a sane implementation of map.
>>>
>>> 1. It is safe to loop over the map items in the *reverse* order of,
>>> deleting whatever items.
>>
>> A sane implementation of a map does not have/require an ordering of keys.
>
> Yes, but iterating a map requires ordering regardless properties of the
> keys.

Only as far as there is an order implied by the order that things are
returned. That order doesn't have any meaning, and certainly there isn't any
such thing as "forward" or "reverse" to it. (Which was the original claim,
after all.) There is no "natural" order to the key/element pairs; they are
effectively unordered.

...

>> So
>> the idea of "reverse" or "forward" does not make sense for a general map.
>> (There are, of course, special cases where the keys have an order that
>> matters to the map; the standard ordered map is like that.) Assuming an
>> ordering is exposing the implementation unnecessarily.
>
> It always does sense *IF* enumeration (needed for iteration) is provided.
> Enumeration of pairs (<key>, <value>) is not same as ordering values by
> the keys.

True, but it doesn't imply any particular ordering. Certainly, no concept of
"forward" or "reverse" applies to such an ordering (nor any stability
requirement). Practically, you'll get the same order each time if the
container isn't modified, but if it is, all bets are off. (If the container
is changed by element addition or deletion, the index may get rebuilt [hash
table reconstructed if too full, tree-index rebalanced, etc.] and that can
change the iteration order dramatically.)

...
> No. First, it is two different interfaces. A view of a map as:
>
> 1. An ordered set of pairs (<key>, <value>)

This is not a map (in general). There is an *unordered* set of pairs. You
can retrieve them all, but the order that is done is meaningless and is an
artifact of the implementation. There's a reason that maps don't have
reverse iterators.

> 2. A mapping <key> -> <value>
>
> Second, the point is that both are array interfaces. The first has
> position as the index, the second has the key as the index.

"Position" is not a property of an (abstract) map. That's my complaint about
looking at everything as an array -- one starts thinking in terms of
properties that things don't have (or need).

> Both are invariant to removal a pair and any *sane* implementation must be
> OK with that.

The only sort of position that you could possibly talk about for a map is
the ordinal order in which an iterator returns key/element pairs. But that
necessarily changes when you insert/delete a pair, as that pair will occur
at some (unspecified) point in the ordinal order. Otherwise, you won't have
the performance expected for key lookup in a map.

> The problem is not whether you allocate pairs individually or not. The
> insanity begins with things unrelated to the map:
>
> 1. OOP iterator object.
>
> 2. FP iteration function.
>
> Both are bad ideas imposed by poor programming paradigms on implementation
> of a clear mathematical concept. That comes with constraints, assumptions
> and limitation array interface do not have.

??? Abstractions are "poor ideas"? You have some problem with an iterator
interface as opposed to an array interface?? That makes no sense at all given
your other positions.

> for Index in reverse Map'Range loop
> Map.Delete (Index);
> end loop;
>
> would always work.

It only works if you think of Map'Range as an iterator object. Otherwise,
you would have to impose an extra "position" interface on the map (or other
container), and at a substantial additional cost in time/space. Containers
in general don't have "positions", elements are unordered unless the
container imposes one.



...
> Arrays have interface and implementation. The array interface is a mapping
> key -> value, the most fundamental thing in programming.

That's only part of it. It also includes the idea of "position", including
calculated positions, the operations of concatenation and slicing, and (for
Ada at least) ordering operations. If the array interface was *only* a
mapping I would not object to it. Maps do not have a natural order, and
nothing should be depending on such order. There is no meaning to the third
pair in a map.

> An array implementation as a contiguous block of values indexed by a
> linear function is a basic data structure that supports the interface.

Right: the much more complex interface I note above. And that's the problem.
You don't even seem to realize all of the unnecessary baggage that arrays
carry with them.

...
> Sure. The problem with Ada is that it does not separate array interface
> from its built-in array implementation and does not separate record
> interface and implementation either.

Not arguing this. (Other than this is way down the list of problems with
Ada, there are many that are worse.)

...
> Now, tell me that you have a longstanding objection to the entire concept
> of records... (:-))

Nope. There has to be a heterogeneous grouping of values, and records do it as
well as anything else. I do agree that more abstraction would be nice.

The problem with arrays is that the mapping part is tied to many other
supposedly fundamental capabilities that aren't fundamental at all. Even
intelligent people such as yourself have been using arrays so long and so
primitively that you've gotten blinded to the fact that basic data
structures really have only a handful of operations, and the majority of the
"fundamental" capabilities aren't needed much of the time and certainly
should only be provided when needed.

Randy.



Dmitry A. Kazakov

Dec 30, 2023, 6:07:50 AM
On 2023-12-30 08:21, Randy Brukardt wrote:
> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
> news:umm4r5$ppag$1...@dont-email.me...
>> On 2023-12-29 04:20, Randy Brukardt wrote:
>>> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
>>> news:umk6ds$e9hc$1...@dont-email.me...
>>> ...
>>>> Provided a sane implementation of map.
>>>>
>>>> 1. It is safe to loop over the map items in the *reverse* order of,
>>>> deleting whatever items.
>>>
>>> A sane implementation of a map does not have/require an ordering of keys.
>>
>> Yes, but iterating a map requires ordering regardless properties of the
>> keys.
>
> Only as far as there is an order implied by the order that things are
> returned. That order doesn't have any meaning, and certainly there isn't any
> such thing as "forward" or "reverse" to it. (Which was the original claim,
> after all.) There is no "natural" order to the key/element pairs; they are
> effectively unordered.

Iteration = order. It is the same thing. If you provide iteration of
pairs in the mapping, by doing so you provide an ordering of them.

>> It always does sense *IF* enumeration (needed for iteration) is provided.
>> Enumeration of pairs (<key>, <value>) is not same as ordering values by
>> the keys.
>
> True, but it doesn't imply any particular ordering. Certainly, no concept of
> "forward" or "reverse" applies to such an ordering (nor any stability
> requirement).

It does. You have a strict total order of pairs which guarantees the
existence of previous and next pairs according to it.

> Practically, you'll get the same order each time if the
> container isn't modified, but if it is, all bets are off. (If the container
> is changed by element addition or deletion, the index may get rebuilt [hash
> table reconstructed if too full, tree-index rebalanced, etc.] and that can
> change the iteration order dramatically.)

True, an operation may invalidate whatever invariants. It applies
equally to any orders, any cursors and pointers, any hidden states of
pending foreach operations. Sanity means which invariants the
implementation keeps.

I would argue that for general-case containers keeping
iterators/pointers and hidden states would be far more difficult than
keeping an order.

>> No. First, it is two different interfaces. A view of a map as:
>>
>> 1. An ordered set of pairs (<key>, <value>)
>
> This is not a map (in general). There is an *unordered* set of pairs. You
> can retrieve them all, but the order that is done is meaningless and is an
> artifact of the implementation. There's a reason that maps don't have
> reverse iterators.

Unless you provide iteration of the map. Most applications want
iterable maps. Then a finite map is still iterable regardless of best
efforts, though by crude means. E.g. once you have an array (ordered set) of
keys, you are done.

>> 2. A mapping <key> -> <value>
>>
>> Second, the point is that both are array interfaces. The first has
>> position as the index, the second has the key as the index.
>
> "Position" is not a property of an (abstract) map. That's my complaint about
> looking at everything as an array -- one starts thinking in terms of
> properties that things don't have (or need).

Yes position is a property of enumeration.

>> Both are invariant to removal a pair and any *sane* implementation must be
>> OK with that.
>
> The only sort of position that you could possibility talk about for a map is
> the ordinal order that an iterator returns key/element pairs.

It is the reverse. Iterators are secondary to the order. An iterator walks
pairs in the order of the pairs, i.e. in the order of their positions.

> But that
> necessarily changes when you insert/delete a pair, as that pair will occur
> at some (unspecified) point in the ordinal order. Otherwise, you won't have
> the performance expected for key lookup in a map.

If you provide a random order, then yes. This is what an "insane"
implementation would do. A "sane" implementation would deploy orders
with reasonable properties. E.g. an obvious one: if k1/=k2/=k3, then (k1,v1) <
(k2,v2) is preserved when (k3,v3) is added or removed.

>> The problem is not whether you allocate pairs individually or not. The
>> insanity begins with things unrelated to the map:
>>
>> 1. OOP iterator object.
>>
>> 2. FP iteration function.
>>
>> Both are bad ideas imposed by poor programming paradigms on implementation
>> of a clear mathematical concept. That comes with constraints, assumptions
>> and limitation array interface do not have.
>
> ??? Abstractions are "poor ideas"?

Neither is an abstraction [as they are not entities of the problem
space, but artifacts of programming techniques, [anti-]patterns]. An iterator
is an object of an unrelated type. Foreach is a stateful operation
unrelated to the pure map interface.

> You have some problem with an iterator
> interface as opposed to an array interface??

Yes, I am against pointers (referential semantics) in general. BTW, Ada
should have abstract pointer interface allowing the programmer to
implement iterators = fat pointers.

[ It would be fun with the pure unordered maps you suggested, the
implementation of the pointer (iterator) would keep an array or an
ordered set of keys... (:-)) ]

>> for Index in reverse Map'Range loop
>> Map.Delete (Index);
>> end loop;
>>
>> would always work.
>
> It only works if you think of Map'Range as an iterator object. Otherwise,
> you would have to impose an extra "position" interface on the map (or other
> container), and at a substantial additional cost in time/space. Containers
> in general don't have "positions", elements are unordered unless the
> container imposes one.

Yes, I would impose positions in all general case containers.

Specialized very large containers, where an implementation without
caching would become O(log n) rather than O(1), deploy other means of
traversal anyway.

>> Arrays have interface and implementation. The array interface is a mapping
>> key -> value, the most fundamental thing in programming.
>
> That's only part of it. It also includes the idea of "position",

Yes. Position in array is a mapping key/index <-> Natural.

> including calculated positions,

Yes. Natural numbers have numeric operations.

> the operations of concatenation and slicing,

That depends, but like with maps, it is expected. Maps as containers are
expected to provide "concatenations" of pairs (set-theoretic union) and
slicing (submaps). Because mathematically maps are sets of pairs and
sets can be manipulated in many ways. Ordering does not add much to the
interface.

> and (for
> Ada at least) ordering operations. If the array interface was *only* a
> mapping I would not object to it. Maps do not have a natural order, and
> nothing should be depending on such order. There is no meaning to the third
> pair in a map.

Yes, but those are not iterable. We are talking about maps one can
iterate. That requires an order. The question is only about the forms of
exposure of that order in the interface. My objection is that iterators
and foreach are poor forms.

>> An array implementation as a contiguous block of values indexed by a
>> linear function is a basic data structure that supports the interface.
>
> Right: the much more complex interface I note above. And that's the problem.
> You don't even seem to realize all of the unnecessary baggage that arrays
> carry with them.

I don't see anything that is not already there. What are reasons for not
providing:

M (n) [ e.g. M (n).Key, M (n).Value ]
M (n1..n2) [ in mutable contexts too ]
M'First
M'Last
M1 & M2 [ M1 or M2 ]

They are all well-defined and useful operations.

> The problem with arrays is that the mapping part is tied to many other
> supposedly fundamental capabilities that aren't fundamental at all.

I disagree in the case of 1D arrays. There are of course interesting
issues with nD arrays, but that is where multiple inheritance kicks in,
because in mathematics you can have "continuations" of concepts in more
than one direction. So a 1D array might be both an nD array and something
else too.

> Even
> intellegent people such as yourself have been using arrays so long and so
> primitively that you've gotten blinded to the fact that basic data
> structures really have only a handful of operations, and the majority of the
> "fundamental" capabilities aren't needed much of the time and certainly
> should only be provided when needed.

That is true. But again, it is solved by inheritance already. You can
have an unordered map interface separately inherited by a general-case
map. You can split interfaces to refine what operations they include
from the implementation constraints point of view. So you can have a
very flexible mesh of implementations sharing some interfaces, but not
others. The best example is, of course, the various types of strings.

DrPi

Dec 31, 2023, 8:56:11 AM
I modified my code to use cursors.
Thanks for your help.

Nicolas

G.B.

Jan 1, 2024, 2:27:55 PM
On 29.12.23 17:52, Dmitry A. Kazakov wrote:

>> Suppose that there is a way of orderly proceeding from one item to the next.
>> It is probably known to the implementation of map. Do single steps
>> guarantee transitivity, though, so that an algorithm can assume the
>> order to be invariable?
>
> An insane implementation can expose random orders each time.

An implementation order should then not be exposed, right?
What portable benefits would there be when another interface
is added to that of map, i.e., to Ada containers for general use?
Would it not be possible to get these benefits using a different
approach? I think the use case is clearly stated:

First, find Cursors in map =: C*.
Right after that, Delete from map all nodes referred to by C*.


> Unless removing element invalidates all cursors. Look, insanity has no bounds. Cursors AKA pointers are as volatile as positions in certain implementations. Consider a garbage collector running after removing a pair and shuffling remaining pairs in memory.
>
>> Maybe the bulk operations of some DBMS' programming
>> interfaces work just like this, for practical reasons.
>> Ada 202x' Ordered_Maps might want to add a feature ;-)
>>
>>       procedure Delete (Container : in out Map;
>>                         From      : in out Cursor;
>>                         To        : in out Cursor);
>
> Here you assume that cursors are ordered and the order is preserved from call to call. Even if From and To are stable the range From..To can include random pairs in between.

Yes, given the descriptions of Ordered_Maps, so long as there is no
tampering, a Cursor will respect an order. Likely the one that the
programmer has in mind.

For deleting, this thread has shown a loop that calls Delete
multiple times right after collecting the cursors.
And it is boilerplate text. Could Maps be improved for this use case?

[Bulk deletion] We do get bulk insertion in containers. Also,
A.18.2 already has bulk Delete operations. Similarly,
the Strings packages have them.

[No thread safety needed] If standard Ada maps are usually operated
by just one task, stability of Cursors is predictable.

Then, with or without automatic management of storage,
when My_Map is from an instance of Ordered_Map,

   Start  := In_13th_Floor (My_Map.Ceiling (13.0));
   Finish := In_13th_Floor (My_Map.Floor (Fxd'Pred (14.0)));
   My_Map.Delete
     (From    => Start,
      Through => Finish);

where

   function In_13th_Floor (C : Cursor) return Cursor
   --  C, if the key at C is in [13.0, 14.0), No_Element otherwise

should therefore do the right thing, in that nothing
is left to chance.

Dmitry A. Kazakov

Jan 1, 2024, 3:55:16 PM
On 2024-01-01 20:27, G.B. wrote:
> On 29.12.23 17:52, Dmitry A. Kazakov wrote:
>
>>> Suppose that there is a way of orderly proceeding from one item to
>>> the next.
>>> It is probably known to the implementation of map. Do single steps
>>> guarantee transitivity, though, so that an algorithm can assume the
>>> order to be invariable?
>>
>> An insane implementation can expose random orders each time.
>
> An implementation order should then not be exposed, right?

IMO, an order should be exposed. Not necessarily the "implementation
order" whatever that might mean.

> What portable benefits would there be when another interface
> is added to that of map, i.e., to Ada containers for general use?

It is the same benefit Ada arrays have over C's T* pointers and the
arithmetic of them. A Cursor is merely a fat pointer.

> Would it not be possible to get these benefits using a different
> approach? I think the use case is clearly stated:
>
> First, find Cursors in map =: C*.
> Right after that, Delete from map all nodes referred to by C*.

Right. Find cursors, store cursors in another container, iterate that
container deleting elements of the first. Now, consider that the cursors
in the second container become invalid (dangling pointers). If you
wanted to delete them immediately from the second container, you would
be back to square one! (:-))

With a positional access interface it would be just (pure Ada 95):

   for Index in reverse 1 .. Number_Of_Elements (Container) loop
      if Want_To_Delete (Get (Container (Index))) then
         Delete (Container, Index);
      end if;
   end loop;

> For deleting, this thread has shown a loop that calls Delete
> multiple times right after collecting the cursors.
> And it is boilerplate text.  Could Maps be improved for this use case?

See above.

> [Bulk deletion] We do get bulk insertion in containers.  Also,
> A.18.2 already has bulk Delete operations.  Similarly,
> the Strings packages have them.
>
> [No thread safety needed] If standard Ada maps are usually operated
> by just one task, stability of Cursors is predictable.
>
> Then, with or without automatic management of storage,
> when My_Map is from an instance of Ordered_Map,
>
>    Start  := In_13th_Floor (My_Map.Ceiling (13.0));
>    Finish := In_13th_Floor (My_Map.Floor (Fxd'Pred (14.0)));
>    My_Map.Delete (
>       From    => Start,
>       Through => Finish);

The case is more general: delete pairs satisfying a certain criterion.

G.B.

Jan 2, 2024, 11:40:10 AM
On 01.01.24 21:55, Dmitry A. Kazakov wrote:

>> Would it not be possible to get these benefits using a different
>> approach? I think the use case is clearly stated:
>>
>> First, find Cursors in map =: C*.
>> Right after that, Delete from map all nodes referred to by C*.
>
> Right. Find cursors, store cursors in another container, iterate that container deleting elements of the first. Now, consider that the cursors in the second container become invalid (dangling pointers). If you wanted to delete them immediately from the second container, you would return the square one! (:-))

OK, yet if indicating nodes in a container is designed to survive
any deletions, leaving nothing dangling, doesn't this limit
the way in which implementations can store nodes?
It seems like every "pointer" (= (Container, Position)) needs to know
its element, and/or the container will have to equip its storage
management with a "vacuuming" algorithm accordingly.

Transactions seem more flexible (a locking flag in a sequential
algorithm?), a dedicated operation looks simpler than both.

Dmitry A. Kazakov

Jan 2, 2024, 3:57:38 PM
On 2024-01-02 17:40, G.B. wrote:
> On 01.01.24 21:55, Dmitry A. Kazakov wrote:
>
>>> Would it not be possible to get these benefits using a different
>>> approach? I think the use case is clearly stated:
>>>
>>> First, find Cursors in map =: C*.
>>> Right after that, Delete from map all nodes referred to by C*.
>>
>> Right. Find cursors, store cursors in another container, iterate that
>> container deleting elements of the first. Now, consider that the
>> cursors in the second container become invalid (dangling pointers). If
>> you wanted to delete them immediately from the second container, you
>> would return the square one! (:-))
>
> OK, yet if indicating nodes in a container is designed to survive
> any deletions, leaving nothing dangling i.e., doesn't this limit
> the way in which implementations can store nodes?

Unless the container keeps references to all pointers and corrects them
as necessary. Which basically defeats the only advantage of pointers.
They have O(1) complexity in large non-contiguous containers while
positions without caching are likely O(log(n)).

> Seems like every "pointer" (= (Container, Position)) needs to know
> its element, and/or the container will have to equip its storage
> management with a "vacuuming" algorithm accordingly.
>
> Transactions seem more flexible (a locking flag in a sequential
> algorithm?), a dedicated operation looks simpler than both.

For large and persistent containers. But that requires a lot of work for
the client and it is slow and resource consuming on the container side.
A general-purpose container must be as simple as tooth powder.

Randy Brukardt

Jan 2, 2024, 10:14:23 PM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
news:umotm2$18lqm$1...@dont-email.me...
...
>> Only as far as there is an order implied by the order that things are
>> returned. That order doesn't have any meaning, and certainly there isn't
>> any
>> such thing as "forward" or "reverse" to it. (Which was the original
>> claim,
>> after all.) There is no "natural" order to the key/element pairs; they
>> are
>> effectively unordered.
>
> Iteration = order. It is the same thing. If you provide iteration of pairs
> in the mapping by doing so you provide an order of.

Certainly not. An iteration presents all of the elements in a container, but
there is no requirement that there is an order. Indeed, logically, all of
the elements are presented at the same time (and parallel iteration provides
an approximation of that).

If you try to enforce an order on things that don't require it, you end up
preventing useful parallelism (practically, at least, no one has succeeded
at providing useful parallelism to sequential code and people have been
trying for about 50 years -- they were trying when I was a university
student in the late 1970s).

>>> It always does sense *IF* enumeration (needed for iteration) is
>>> provided.
>>> Enumeration of pairs (<key>, <value>) is not same as ordering values by
>>> the keys.
>>
>> True, but it doesn't imply any particular ordering. Certainly, no concept
>> of
>> "forward" or "reverse" applies to such an ordering (nor any stability
>> requirement).
>
> It does. You have a strict total order of pairs which guarantees existence
> of previous and next pairs according to.

Again, this is unrelated. Iteration can usefully occur in unordered
containers (that is, "foreach"). Ordering is a separate concept, not always
needed (certainly not in basic structures like maps, sets, and bags).

...
> True, an operation may invalidate whatever invariants. It applies equally
> to any orders, any cursors and pointers, any hidden states of pending
> foreach operations. Sanity means which invariants the implementation
> keeps.

Ada requires that cursors continue to designate the same element through all
operations other than deletion of the element or movement to a different
container. Specific containers have additional invariants, but this is the
most general one. No other requirement is needed in many cases.

...
>> "Position" is not a property of an (abstract) map. That's my complaint
>> about
>> looking at everything as an array -- one starts thinking in terms of
>> properties that things don't have (or need).
>
> Yes position is a property of enumeration.

Surely not. This is a basis for my disagreement with you here. The only
requirement for enumeration is that all elements are produced. The order is
an artifact of doing an inherently parallel operation sequentially. We
don't care about or depend on artifacts.

...
> It is the reverse. Iterators is secondary to the order. Iterator walks
> pairs in the order of pairs = in the order their positions.

Nope, this is completely wrong. Once you start with a bogus premise, of
course you will get all kinds of bogus conclusions!!

...
>> You have some problem with an iterator
>> interface as opposed to an array interface??
>
> Yes, I am against pointers (referential semantics) in general.

This is nonsense - virtually everything is referential semantics (other than
components). Array indexes are just a poor man's pointer (indeed, I learned
how to program in Fortran 66 initially, and the way one built useful data
structures was to use array indexes as stand-ins for pointers). In A(1), 1
is a reference to the first component of A.

So long as you are using arrays, you are using referential semantics. The
only way to avoid it is to embed an object directly in an enclosing
object (as in a record), and that doesn't work for many problems.

...
> I don't see anything that is not already there. What are reasons for not
> providing:
>
> M (n) [ e.g. M (n).Key, M (n).Value ]
> M (n1..n2) [ in mutable contexts too ]
> M'First
> M'Last
> M1 & M2 [ M1 or M2 ]
>
> They are all well-defined and useful operations.

Performance. If all of these things are user-definable, then one has to use
subprogram calls to implement all of them. That can be very expensive,
particularly in the case of mutable operations (and mutable slices are the
worst of all). Moreover, since one would want the ability to have generic
and runtime parameters that only meet this interface (and the ability to
pass slices to subprograms that take unconstrained array parameters), you
would have to be able to pass all of these subprograms along with
parameters, even when you don't need them. That would make array operations
far more expensive than they are today.

I think it is much better to get rid of most of these operations as built-in
things and just let the programmer build their own operations as needed.
That keeps the cost confined to those who use them. Distributed overhead is
the worst kind, and slices in particular have a boatload of that overhead.

Randy.


Randy Brukardt

Jan 2, 2024, 10:21:20 PM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
news:umv8rg$2b4on$1...@dont-email.me...
...
> It is same benefit Ada arrays have over C's T* pointers and arithmetic of.
> Cursor is merely a fat pointer.

A cursor is an abstract reference. It *might* be implemented with a pointer
or with an array index. Indeed, the bounded containers pretty much have to
be implemented with an underlying array.

It would be nice if there was some terminology for abstract references that
hadn't been stolen by some programming language. Terms like "pointer" and
"access" and "reference" all imply an implementation strategy. That's not
relevant most of the time, and many programming language design mistakes
follow from that. (Anonymous access types come to mind).

Randy.


moi

Jan 2, 2024, 11:06:03 PM
What about "currency", as used in DB systems?

--
Bill F.

Dmitry A. Kazakov

Jan 3, 2024, 5:05:01 AM
On 2024-01-03 04:15, Randy Brukardt wrote:
> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
> news:umotm2$18lqm$1...@dont-email.me...
> ...
>>> Only as far as there is an order implied by the order that things are
>>> returned. That order doesn't have any meaning, and certainly there isn't
>>> any
>>> such thing as "forward" or "reverse" to it. (Which was the original
>>> claim,
>>> after all.) There is no "natural" order to the key/element pairs; they
>>> are
>>> effectively unordered.
>>
>> Iteration = order. It is the same thing. If you provide iteration of pairs
>> in the mapping by doing so you provide an order of.
>
> Certainly not. An iteration presents all of the elements in a container, but
> there is no requirement that there is an order.

The meaning of the word "iterate" is doing something (e.g. visiting an
element) again. That *is* an order.

> Indeed, logically, all of
> the elements are presented at the same time (and parallel iteration provides
> an approximation of that).

Parallel iteration changes nothing because involved tasks are enumerated
and thus ordered as well.

> If you try to enforce an order on things that don't require it, you end up
> preventing useful parallelism (practically, at least, no one has succeeded
> at providing useful parallelism to sequential code and people have been
> trying for about 50 years -- they were trying when I was a university
> student in the late 1970s).

Ordering things does not prevent parallelism. But storing cursors for
later is the mother of all Sequentialisms! (:-))

Whether a container's elements can be effectively deleted in parallel is
an interesting question, but rather an unpractical one. Nobody, literally
nobody, cares, because any implementation would be many times slower than
the worst sequential one! (:-))

>>>> It always does sense *IF* enumeration (needed for iteration) is
>>>> provided.
>>>> Enumeration of pairs (<key>, <value>) is not same as ordering values by
>>>> the keys.
>>>
>>> True, but it doesn't imply any particular ordering. Certainly, no concept
>>> of
>>> "forward" or "reverse" applies to such an ordering (nor any stability
>>> requirement).
>>
>> It does. You have a strict total order of pairs which guarantees existence
>> of previous and next pairs according to.
>
> Again, this is unrelated. Iteration can usefully occur in unordered
> containers (that is, "foreach").

"An enumeration is a complete, ordered listing of all the items in a
collection."
-- Wikipedia

If "foreach" exposes an arbitrary ordering rather than some meaningful
(natural) one, that speaks for "insanity" but changes nothing.

> Ordering is a separate concept, not always
> needed (certainly not in basic structures like maps, sets, and bags).

Right. But no ordering means no iteration, no foreach etc. If I can
iterate, then I can create an ordered set of (counter, element) pairs. Done.

>> Yes position is a property of enumeration.
>
> Surely not. This is a basis for my disagrement with you here.

Then you are disagreeing with core mathematics... (:-))

> The only
> requirement for enumeration is that all elements are produced.

Produced in an order. "Elements only produced" is merely an opaque set.
Enumeration of that set is ordering its elements.

> The order is
> an artifact of doing it an inherently parallel operation sequentally.

Yes, ordering is the ability to enumerate elements of a set. It is not an
artifact, it is the sole semantics of enumeration.

[...]

>>> You have some problem with an iterator
>>> interface as opposed to an array interface??
>>
>> Yes, I am against pointers (referential semantics) in general.
>
> This is nonsense - virtually everything is referential semantics (other than
> components). Array indexes are just a poor mans pointer (indeed, I learned
> how to program in Fortran 66 initially, and way one built useful data
> structures was to use array indexes as stand-ins for pointers). In A(1), 1
> is a reference to the first component of A.
>
> So long as you are using arrays, you are using referential semantics. The
> only way to avoid it is to directly embed an object directly in an enclosing
> object (as in a record), and that doesn't work for many problems.

The key difference is that the index does not refer to any element. It is
the container + index that do.

From the programming POV it is about avoiding hidden states when you
try to sweep the container part under the rug.

>> I don't see anything that is not already there. What are reasons for not
>> providing:
>>
>> M (n) [ e.g. M (n).Key, M (n).Value ]
>> M (n1..n2) [ in mutable contexts too ]
>> M'First
>> M'Last
>> M1 & M2 [ M1 or M2 ]
>>
>> They are all well-defined and useful operations.
>
> Performance.

Irrelevant, so long as it does not tamper with the implementations of other operations.

> If all of these things are user-definable, then one has to use
> subprogram calls to implement all of them. That can be very expensive,
> particularly in the case of mutable operations (and mutable slices are the
> worst of all).

But better, faster, safer when implemented ad-hoc by the programmer? See
how this thread started. An elementary task, solved in a most
inefficient, crude and unmaintainable way...

> Moreover, since one would want the ability to have generic
> and runtime parameters that only meet this interface (and the ability to
> pass slices to subprograms that take unconstrained array parameters), you
> would have to be able to pass all of these subprograms along with
> parameters, even when you don't need them. That would make array operations
> far more expensive than they are today.

Yes, and the language separates mutable and immutable stuff when using
containers already. You cannot get around this issue.

> I think it is much better to get rid of most of these operations as built-in
> things and just let the programmer build their own operations as needed.

Well, if you'd proposed throwing containers out of the standard library, I
would believe that you believe in that... (:-))

> That keeps the cost confined to those who use them. Distributed overhead is
> the worst kind, and slices in particular have a boatload of that overhead.

Usability always trumps performance. And again, looking at the standard
containers and all these *tagged* *intermediate* objects one needs in
order to do elementary things, I kind of have my doubts... (:-))

Randy Brukardt

Jan 3, 2024, 11:06:40 PM
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
news:un3bg9$35mhv$1...@dont-email.me...
> On 2024-01-03 04:15, Randy Brukardt wrote:
>> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
>> news:umotm2$18lqm$1...@dont-email.me...
...
> The meaning of the word "iterate" is doing something (e.g. visiting an
> element) again. That *is* an order.

The order is an artifact. One logically visits all of the elements at the
same time (certainly in an unordered container like a general map).

>> Indeed, logically, all of
>> the elements are presented at the same time (and parallel iteration
>> provides
>> an approximation of that).
>
> Parallel iteration changes nothing because involved tasks are enumerated
> and thus ordered as well.

Nonsense. There is no interface in Ada to access logical threads (the ones
created by the parallel keyword).

>> If you try to enforce an order on things that don't require it, you end
>> up
>> preventing useful parallelism (practically, at least, no one has
>> succeeded
>> at providing useful parallelism to sequential code and people have been
>> trying for about 50 years -- they were trying when I was a university
>> student in the late 1970s).
>
> Ordering things does not prevent parallelism.

Yes it does, because it adds unnecessary constraints. It's those constraints
that make parallelizing normal sequential code hard. A parallelizer has to
guess which ones are fundamental to the code's meaning and which ones are not.

...
>> Ordering is a separate concept, not always
>> needed (certainly not in basic structures like maps, sets, and bags).
>
> Right. But no ordering means no iteration, no foreach etc. If I can
> iterate, that I can create an ordered set of (counter, element) pairs.
> Done.
>
>>> Yes position is a property of enumeration.
>>
>> Surely not. This is a basis for my disagrement with you here.
>
> Then you are disagreeing with core mathematics... (:-))

You are adding an unnecessary property to the concept of iteration.
Iteration does not necessarily imply enumeration (it can, of course).
Iteration /= enumeration.

...
>> The order is
>> an artifact of doing it an inherently parallel operation sequentally.
>
> Yes, ordering is an ability to enumerate elements of a set. It is not an
> artifact it is the sole semantics of.

Iteration is not necessarily enumeration. It is applying an operation to all
elements, and doing that does not require an order. Some specific operations
might require an order, and clearly for those one needs to use a data
structure that inherently has an order.
...
>> So long as you are using arrays, you are using referential semantics. The
>> only way to avoid it is to directly embed an object directly in an
>> enclosing
>> object (as in a record), and that doesn't work for many problems.
>
> The key difference is that index does not refer any element. It is
> container + index that do.

That's not a "key difference". That exactly how one should use cursors,
especially in Ada 2022. The Ada containers do have cursor-only operations,
but those should be avoided since it is impossible to provide useful
contracts for those operations (the container is unknown, so the world can
be modified, which is bad for parallelism and understanding). Best to
consider those operations obsolete. (Note that I was *always* against the
cursor-only operations in the containers.)

So, using a cursor implies calling an operation that includes the container
among its parameters.
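
[A small illustration of the difference, assuming a hashed-map instance
named Maps with a map object M, a cursor C and an element variable V; the
container-aware forms below are the Ada 2022 style already used in the
code earlier in this thread.]

   --  Cursor-only operation: the container is implicit in the cursor,
   --  so nothing useful can be contracted about what may be touched.
   V := Maps.Element (C);

   --  Container-aware forms: the container is an explicit parameter.
   V := M.Element (C);   --  Ada 2022 overload taking both Map and Cursor
   V := M (C);           --  Ada 2012 Constant_Indexing, also container-aware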

> From the programming POV it is about avoiding hidden states when you try
> to sweep the container part under the rug.

That's easily avoided -- don't use the obsolete operations. (And a style
tool like Jean-Pierre's can enforce that for you.)

>>> I don't see anything that is not already there. What are reasons for not
>>> providing:
>>>
>>> M (n) [ e.g. M (n).Key, M (n).Value ]
>>> M (n1..n2) [ in mutable contexts too ]
>>> M'First
>>> M'Last
>>> M1 & M2 [ M1 or M2 ]
>>>
>>> They are all well-defined and useful operations.
>>
>> Performance.
>
> Irrelevant so long it does not tamper implementations of other operations.

Exactly. These operations, especially slicing, have a huge impact on the
cost of parameter passing for arrays (whether or not they are used). And
that's a pretty fundamental operation.

>> If all of these things are user-definable, then one has to use
>> subprogram calls to implement all of them. That can be very expensive,
>> particularly in the case of mutable operations (and mutable slices are
>> the
>> worst of all).
>
> But better, faster, safer when implemented ad-hoc by the programmer?

As with all programming problems, you can only have two of the three. ;-)

If the underlying programming language is already better and safer, that
extends to user-written operations. (If it doesn't, it is a failure.)

...
...
>> I think it is much better to get rid of most of these operations as
>> built-in
>> things and just let the programmer build their own operations as needed.
>
> Well, if you'd proposed throwing containers out the standard library I
> would believe that you believe in that... (:-))

The standard library is not part of the programming language. It's a
necessary adjunct, but it could be completely replaced without changing the
language at all. It's necessary mainly so that basic operations are provided
in the same way by all implementations, but there is little requirement to
use it.

Specifically, the containers are separate from Ada. Plenty of programmers
use their own container libraries, with different performance and safety
profiles. That's expected and intended. There is no one-size-fits-all (or
even one-size-fits-many) container library.

>> That keeps the cost confined to those who use them. Distributed overhead
>> is
>> the worst kind, and slices in particular have a boatload of that
>> overhead.
>
> Usability always trumps performance.

That's the philosophy of languages like Python, not Ada. If you truly
believe this, then you shouldn't be using Ada at all, since it makes lots of
compromises to usability in order to get performance.

> And again, looking at the standard containers and all these *tagged*
> *intermediate* objects one needs in order to do elementary things, I kind
> of in doubts... (:-))

The standard containers were designed to make *safe* containers with decent
performance. As I noted, they're not a built-in part of the programming
language, and as such have no impact on the performance of the language
proper. One could easily replace them with an unsafe design to get maximum
performance -- but that would have to return pointers to elements, and
you've said you don't like referential semantics. So you would never use
those.

You also can avoid all of the "tagged objects" (really controlled objects)
by using function Element to get a copy of the element rather than some sort
of reference to it. That's preferred if it doesn't cost too much for your
application.
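
[To make the last point concrete, a short sketch assuming a map object M,
a cursor C and an element variable V. The first form copies the element and
creates no intermediate object; the second goes through Constant_Indexing
and returns a controlled reference object, which is where the "tagged
intermediate objects" mentioned above come from.]

   V := M.Element (C);   --  plain copy of the element
   V := M (C);           --  implicit dereference of a controlled reference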

Randy.


Dmitry A. Kazakov

Jan 4, 2024, 6:28:07 AM
On 2024-01-04 05:07, Randy Brukardt wrote:
> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
> news:un3bg9$35mhv$1...@dont-email.me...

[...]

>> Yes, ordering is an ability to enumerate elements of a set. It is not an
>> artifact it is the sole semantics of.
>
> Iteration is not necessarily enumeration. It is applying an operation to all
> elements, and doing that does not require an order.

That is not iteration; it is an unordered listing, a totally useless
thing, because the result is the same unordered set.

You could not implement it without first ordering the elements you feed
to the threads. If the threads picked up elements concurrently, there
would be no way to do that without ordering the elements into taken /
not-yet-taken. Hell, you cannot even get a single element out of a truly
unordered set! And if the programmer tried to make any use of the
listing, he would again have to impose an ordering when collecting the
results into some shared object.

Unordered listing is a null operation without ordering.

>> The key difference is that index does not refer any element. It is
>> container + index that do.
>
> That's not a "key difference". That exactly how one should use cursors,
> especially in Ada 2022. The Ada containers do have cursor-only operations,
> but those should be avoided since it is impossible to provide useful
> contracts for those operations (the container is unknown, so the world can
> be modified, which is bad for parallelism and understanding). Best to
> consider those operations obsolete. (Note that I was *always* against the
> cursor-only operations in the containers.)
>
> So, using a cursor implies calling an operation that includes the container
> of its parameter.

OK. It is some immensely over-designed index operation, then! (:-)) So
my initial question is back: why all that overhead, when you cannot do
elementary things like keeping the indices of a well-defined set valid
while deleting elements whose indices lie outside that set?

>>>> I don't see anything that is not already there. What are reasons for not
>>>> providing:
>>>>
>>>> M (n) [ e.g. M (n).Key, M (n).Value ]
>>>> M (n1..n2) [ in mutable contexts too ]
>>>> M'First
>>>> M'Last
>>>> M1 & M2 [ M1 or M2 ]
>>>>
>>>> They are all well-defined and useful operations.
>>>
>>> Performance.
>>
>> Irrelevant so long it does not tamper implementations of other operations.
>
> Exactly. These operations, especially slicing, have a huge impact on the
> cost of parameter passing for arrays (whether or not they are used). And
> that's a pretty fundamental operation.

It is not slicing; it is dynamically constrained arrays, which are
required anyway. A general problem of language design is how to handle
statically known constraints efficiently.

Ada arrays look pretty good to me. Note that I am saying this after
years of using Ada arrays for interfacing to C! Yes, I would like more
support for flattening arrays, but the mere fact that Ada can interface
to C with *in-place* semantics invalidates your point.

> Specifically, the containers are separate from Ada.

Not really. Like the STL with C++, the container library massively
influenced the language design, motivating certain language features and
shifting the general language paradigm in a certain direction.

>> Usability always trumps performance.
>
> That's the philosophy of languages like Python, not Ada.

Ah, so this is why Python is totally unusable? (:-))

Ada is usable and performant because of the right abstractions it
deploys. If you notice performance problems then maybe, just my guess,
you are using the wrong abstraction?

>> And again, looking at the standard containers and all these *tagged*
>> *intermediate* objects one needs in order to do elementary things, I kind
>> of in doubts... (:-))
>
> The standard containers were designed to make *safe* containers with decent
> performance.

Well, we always wish the best... (:-))

Randy Brukardt

unread,
Jan 4, 2024, 8:59:46 PMJan 4
to
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
news:un64o3$3krch$1...@dont-email.me...
...
>> Exactly. These operations, especially slicing, have a huge impact on the
>> cost of parameter passing for arrays (whether or not they are used). And
>> that's a pretty fundamental operation.
>
> It is not slicing it is dynamically constrained arrays which are required
> anyway. A general problem of language design is how to treat statically
> known constraints effectively.

No, it's the combination of slicing and passing arrays with unknown (at
compile-time) constraints that causes problems. And it only causes problems
if you want to separate the array interface and the array implementation
(which we both want to do). In such a case, you are passing arrays with
unknown constraints and implementation. Assignable slices don't work with
that, as they require a contiguous implementation of elements. You could
work around that in various ways (passing slices by copy-result, passing an
assignment routine along with the parameter), but all of them have
substantial performance impacts that would make traditional array
implementations much slower.
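
For reference, this is the sort of built-in operation at issue: the
compiler can turn a slice assignment into a single block move only
because the elements are known to be stored contiguously (a trivial
sketch, all names invented):

   procedure Slice_Demo is
      type Buffer is array (Positive range <>) of Character;
      A : Buffer (1 .. 10) := (others => ' ');
      B : Buffer (1 .. 3)  := "abc";
   begin
      A (4 .. 6) := B;  -- assignable slice: effectively one contiguous copy
   end Slice_Demo;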

> Ada arrays are pretty good to me. Note, I am saying that after years of
> using Ada arrays for interfacing C! Yes, I would like having more support
> for flattening arrays, but the mere fact that Ada can interface C using
> *in-place* semantics invalidates your point.

Interfacing is using the array implementation, not the array interface.
Of course it works great; as you note, Ada commingles those in a way that
makes them inseparable. To separate them, you are going to have to lose
something. I chose slices and the other array-specific built-in
operations as that something. YMMV.

...
>>> Usability always trumps performance.
>>
>> That's the philosophy of languages like Python, not Ada.
>
> Ah, this is why Python is totally unusable? (:-))

I would argue that you do indeed get dubious results when you put
usability first. Ada puts readability/understandability, maintainability,
and consistency first (along with performance). Those attributes tend to
provide usability, but not at the cost of making things less consistent
or understandable.

I wrote an article on this topic a year and a half ago that I wanted to
publish on Ada-Auth.org. But I got enough pushback about not being "neutral"
that I never did so. (I don't think discussing why we don't do things some
other languages do is negative, but whatever.) I've put this on RR's blog at
http://www.rrsoftware.com/html/blog/consequences.html so it isn't lost.

Randy.


Simon Wright

unread,
Jan 5, 2024, 4:26:06 AMJan 5
to
"Randy Brukardt" <ra...@rrsoftware.com> writes:

> I wrote an article on this topic a year and a half ago that I wanted to
> publish on Ada-Auth.org. But I got enough pushback about not being "neutral"
> that I never did so. (I don't think discussing why we don't do things some
> other languages do is negative, but whatever.) I've put this on RR's blog at
> http://www.rrsoftware.com/html/blog/consequences.html so it isn't lost.

Thanks for this!

Dmitry A. Kazakov

unread,
Jan 5, 2024, 6:51:54 AMJan 5
to
On 2024-01-05 03:00, Randy Brukardt wrote:
> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
> news:un64o3$3krch$1...@dont-email.me...
> ...
>>> Exactly. These operations, especially slicing, have a huge impact on the
>>> cost of parameter passing for arrays (whether or not they are used). And
>>> that's a pretty fundamental operation.
>>
>> It is not slicing it is dynamically constrained arrays which are required
>> anyway. A general problem of language design is how to treat statically
>> known constraints effectively.
>
> No, it's the combination of slicing and passing arrays with unknown (at
> compile-time) constraints that causes problems.

As well as passing non-null pointers, discriminated objects, class-wide
objects...

> And it only causes problems
> if you want to separate the array interface and the array implementation
> (which we both want to do). In such a case, you are passing arrays with
> unknown constraints and implementation. Assignable slices don't work with
> that, as they require a contiguous implementation of elements.

Only if the base type's implementation is contiguous. A slice of a
non-contiguous container is non-contiguous. If you can deal with the
first, you can deal with the second.

The crucial point is removing static constraints from the instances,
which Ada does on some occasions and not on others, especially for
user-defined types.

> Inferfacing is using the array implementation, not the array interface. Of
> course it works great, as you note Ada commingles those in a way that makes
> them inseparable. To separate them, you are going to have to lose something.

Sure, but the point is that the loss should never happen when the
constraints are static and the callee knows them. When the callee does
not, a constraint must be passed to it.

>>>> Usability always trumps performance.
>>>
>>> That's the philosophy of languages like Python, not Ada.
>>
>> Ah, this is why Python is totally unusable? (:-))
>
> I would tend to argue that it is indeed the case that you get dubious
> results when you put usability first. Ada puts
> readability/understandability, maintainability, and consistency first (along
> with performance). Those attributes tend to provide usability, but not at
> the cost of making things less consistent or understandable.
>
> I wrote an article on this topic a year and a half ago that I wanted to
> publish on Ada-Auth.org. But I got enough pushback about not being "neutral"
> that I never did so. (I don't think discussing why we don't do things some
> other languages do is negative, but whatever.) I've put this on RR's blog at
> http://www.rrsoftware.com/html/blog/consequences.html so it isn't lost.

Thanks for posting this.

I disagree with what you wrote on several points:

1. Your premise was that use = writing. To me, using includes all aspects
of the software development and maintenance process. Writing is only a
small part of it.

2. You argue for language regularity as if it were opposed to usability.
Again, it is pretty much obvious that a regular language is easier to use
in every possible sense.

3. Removing meaningless repetition contributes to usability. But X := X
+ Y is only one instance where Ada requires such repetition. There are
others, e.g.

   if X in T'Class then
      declare
         XT : T'Class renames T'Class (X);

T'Class is repeated 3 times. A discussion point is whether a new name XT
could be avoided etc.

Introducing @ for a *single* purpose contradicts the principle of
regularity. I would rather have a regular syntax for most if not all
such instances.

Lawrence D'Oliveiro

unread,
Jan 5, 2024, 9:54:12 PMJan 5
to
On Thu, 4 Jan 2024 20:00:37 -0600, Randy Brukardt wrote:

> "Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
> news:un64o3$3krch$1...@dont-email.me...
>>>
>>>> Usability always trumps performance.
>>>
>>> That's the philosophy of languages like Python, not Ada.
>>
>> Ah, this is why Python is totally unusable? (:-))
>
> I would tend to argue that it is indeed the case that you get dubious
> results when you put usability first. ...
> http://www.rrsoftware.com/html/blog/consequences.html

Without reading that, I would never have understood “usability” to mean
“ease of writing”. I learned from early on in my programming career that
readability was more important than writability. So “using” a language
doesn’t end with writing the code: you then have to test and debug it--
basically lick it into shape--then maintain it afterwards.

Randy Brukardt

unread,
Jan 6, 2024, 2:02:01 AMJan 6
to
"Lawrence D'Oliveiro" <l...@nz.invalid> wrote in message
news:unafcg$bpv5$7...@dont-email.me...
Usability is of course not just ease-of-writing, but a lot of people tend to
co-mingle the two. For readability, too little information can be just as
bad as too much. For writability, the less you have to write, the better.

Randy.


Randy Brukardt

unread,
Jan 6, 2024, 2:24:43 AMJan 6
to
"Dmitry A. Kazakov" <mai...@dmitry-kazakov.de> wrote in message
news:un8qgm$50cc$1...@dont-email.me...
> On 2024-01-05 03:00, Randy Brukardt wrote:
...
> Thanks for posting this.
>
> I disagree with what you wrote on several points:
>
> 1. Your premise was that use = writing. To me using includes all aspects
> of software developing and maintenance process. Writing is only a small
> part of it.

Perhaps I didn't make it clear enough, but my premise was that many people
making suggestions for Ada confuse "ease-of-use" with "ease-of-writing". I
said "mischaracterized" for a reason (and I see that "mis" was missing from
the first use, so I just added that). "Ease-of-writing" is not a thing for
Ada, and it isn't considered while the other aspects are weighed. And as I
said in my last message, there is a difference in that writing more can help
understandability, but it never helps writing.

> 2. You argue for language regularity as if it were opposite to usability.
> Again, it is pretty much obvious that a regular language is easier to use
> in any possible sense.

But not necessarily easier to write, which was the primary topic I was
dealing with.

> 3. Removing meaningless repetitions contributes to usability. But X := X +
> Y is only one instance where Ada required such repetition. There are
> others. E.g.
>
> if X in T'Class then
> declare
> XT : T'Class renames T'Class (X);
>
> T'Class is repeated 3 times. A discussion point is whether a new name XT
> could be avoided etc.

Of course, this example violates OOP dogma, and some people would argue
that it should be harder than following it. That's the same reason that
Ada doesn't have that many implicit conversions. In this particular
example, I tend to think the dogma is silly, but I don't offhand see a
way to avoid having the conversion somewhere (few implicit conversions,
after all).

> Introducing @ for a *single* purpose contradicts the principle of
> regularity. I would rather have a regular syntax for most if not all such
> instances.

@ is regular in the sense that it is allowed anywhere in an expression. If
you tried to expand the use to other contexts, you would have to
differentiate them, which would almost certainly require some sort of
declaration. But doing that risks making the mechanism as wordy as what it
replaces (which obviously defeats the purpose).

We looked at a number of ideas like that, but they didn't seem to help
comprehension. In something like:
LHS:(X(Y)) := LHS + 1;
(where LHS is an arbitrary identifier), if the target name is fairly long,
it could be hard to find where the name for the target is given, and in any
case, it adds to the name space that the programmer has to remember when
reading the source expression. That didn't seem to add to readability as
much as the simple @ does.

In any case, these things are trade-offs, and certainly nothing is absolute.
But @ is certainly much more general than ":=+" would be, given that it
works with function calls and array indexing and attributes and user-defined
operations rather than just a single operator.
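
A couple of small illustrations of that generality (Table, Compute_Index,
Counters and Minimum are invented names):

   Table (Compute_Index (X)) := @ + 1;   -- the target, index and all, is evaluated once
   Counters (Counters'First) := Natural'Max (@, Minimum);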

Randy.


Niklas Holsti

unread,
Jan 6, 2024, 3:14:11 AMJan 6
to
I feel that is too narrow a definition of writability (and perhaps you
did not intend it as a definition). Before one can start typing code,
one has to decide what to write -- which language constructs to use. A
systematically constructed, regular language like Ada makes that mental
effort easier, even if it results in more keystrokes; a plethora of
special-case syntaxes and abbreviation possibilities makes it harder.

Perhaps "writability" should even be taken to cover the whole process of
creating /correct/ code, and include all the necessary testing,
debugging and corrections until correct code is achieved. Here of course
Ada shines again, with so many coding errors caught at compile time.

Lawrence D'Oliveiro

unread,
Jan 6, 2024, 6:41:15 PMJan 6
to
On Sat, 6 Jan 2024 01:03:05 -0600, Randy Brukardt wrote:

> For writability, the less you have to write, the better.

I write code for readability, and I think avoiding repetition fits into
that as well. Thus, factoring repeated sequences into a common function/
class, and just putting calls to that in all the relevant places, is, I
find, generally a Good Thing.

Bug rates seem to be roughly constant per line of code. Therefore, fewer
lines of code means fewer bugs overall.

J-P. Rosen

unread,
Jan 6, 2024, 8:21:36 PMJan 6
to
Le 06/01/2024 à 03:03, Randy Brukardt a écrit :
> Usability is of course not just ease-of-writing, but a lot of people tend to
> co-mingle the two. For readability, too little information can be just as
> bad as too much. For writability, the less you have to write, the better.
>
Yes, I'm always surprised to see many languages (including Rust)
praising themselves for being "concise". Apart from saving some
keystrokes, I fail to see the benefit of being concise...

--
J-P. Rosen
Adalog
2 rue du Docteur Lombard, 92441 Issy-les-Moulineaux CEDEX
https://www.adalog.fr https://www.adacontrol.fr

Jeffrey R.Carter

unread,
Jan 7, 2024, 10:06:14 AMJan 7
to
On 2024-01-06 08:25, Randy Brukardt wrote:
>
> @ is regular in the sense that it is allowed anywhere in an expression. If
> you tried to expand the use to other contexts, you would have to
> differentiate them, which would almost certainly require some sort of
> declaration. But doing that risks making the mechanism as wordy as what it
> replaces (which obviously defeats the purpose).
>
> We looked at a number of ideas like that, but they didn't seem to help
> comprehension. In something like:
> LHS:(X(Y)) := LHS + 1;
> (where LHS is an arbitrary identifier), if the target name is fairly long,
> it could be hard to find where the name for the target is given, and in any
> case, it adds to the name space that the programmer has to remember when
> reading the source expression. That didn't seem to add to readability as
> much as the simple @ does.
>
> In any case, these things are trade-offs, and certainly nothing is absolute.
> But @ is certainly much more general than ":=+" would be, given that it
> works with function calls and array indexing and attributes and user-defined
> operations rather than just a single operator.

For the 9X and 0X revisions I suggested adding "when <condition>" to return and
raise statements, similar to its use on exit statements. This was rejected
because the language already has a way to accomplish this: if statements.
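
For context, here is the existing exit form next to what the suggested
forms would have looked like; the last two never became legal Ada, and
the names are invented:

   exit when Count > Limit;                  -- legal today
   return when Count > Limit;                -- suggested, never adopted
   raise Overflow_Error when Count > Limit;  -- suggested, never adopted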

Given that one can do

   declare
      V : T renames Very_Long_Identifier;
   begin
      V := V - 23;
   end;

it seems that @ should also have been rejected. Probably more so, since @
is completely new syntax rather than a reuse of existing syntax on some
additional statements. What is the justification for accepting @ while
still rejecting the other?
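
For comparison, the Ada 2022 target-name form of that same assignment is
the single line

   Very_Long_Identifier := @ - 23;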

--
Jeff Carter
"If I could find a sheriff who so offends the citizens of Rock
Ridge that his very appearance would drive them out of town ...
but where would I find such a man? Why am I asking you?"
Blazing Saddles
37

Randy Brukardt

unread,
Jan 8, 2024, 11:46:11 PMJan 8
to
"Jeffrey R.Carter" <spam.jrc...@spam.acm.org.not> wrote in message
news:uneel2$12ufr$1...@dont-email.me...
...
> For the 9X and 0X revisions I suggested adding "when <condition>" to
> return and raise statements, similar to its use on exit statements. This
> was rejected because the language already has a way to accomplish this: if
> statements.

I don't recall ever seriously considering this (might just be my memory
getting old). I suspect it didn't get rejected so much as it didn't make
the cut as important enough. We do try to limit the size of what gets
added, rather than just adding everyone's favorite feature.

I'd guess that "raise Foo when Something" would get rejected now, as it
would be confusing with "raise Foo with Something", which means something
very different. (At least the types of "Something" are different in these
two.) OTOH, we added "when condition" to loops (which I thought was
unnecessary, but I lost that), so arguably it would be consistent to add
it to other statements and expressions as well. Perhaps you should raise
it again on GitHub.
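
To make the potential confusion concrete (the "when" form is the
hypothetical syntax discussed above, not legal Ada; the names are
invented):

   raise Queue_Error with "queue overflow";  -- legal: attaches a String message
   raise Queue_Error when Queue_Is_Full;     -- hypothetical: a Boolean guard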

Randy.

Lawrence D'Oliveiro

unread,
Jan 9, 2024, 12:56:37 AMJan 9
to
On Mon, 8 Jan 2024 22:46:59 -0600, Randy Brukardt wrote:

> OTOH, we added "when condition" to loops (which I thought
> was unnecessary, but I lost that) ...

I can see that conditional exits are a very common case, and because Ada
requires “end if” on if-statements, they wanted to shorten the common
case, hence exit-when.
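
That is, each conditional exit saves two lines and a level of nesting
(the condition is invented):

   exit when Buffer_Full;
   -- versus
   if Buffer_Full then
      exit;
   end if;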

Not sure conditional raises are that common. If my Python experience is
any guide, I don't do it that much.

Jeffrey R.Carter

unread,
Jan 9, 2024, 4:43:41 AMJan 9
to
On 2024-01-09 05:46, Randy Brukardt wrote:
> "Jeffrey R.Carter" <spam.jrc...@spam.acm.org.not> wrote in message
> news:uneel2$12ufr$1...@dont-email.me...
>
> I don't recall ever seriously considering this (might just my memory getting
> old). I suspect that didn't get rejected so much as not making the cut as
> important enough.

I don't consider special syntax to shorten names in assignment statements
important at all. We have renames for that, and it is a more general mechanism,
applying to more than just assignments.

--
Jeff Carter
"[I]t is foolish to polish a program beyond the
point of diminishing returns, but most programmers
do too little revision; they are satisfied too
early."
Elements of Programming Style
189

Bill Findlay

unread,
Jan 9, 2024, 10:19:56 AMJan 9
to
On 7 Jan 2024, J-P. Rosen wrote
(in article <uncuas$qe2g$1...@dont-email.me>):

> Le 06/01/2024 à 03:03, Randy Brukardt a écrit:
> > Usability is of course not just ease-of-writing, but a lot of people tend to
> > co-mingle the two. For readability, too little information can be just as
> > bad as too much. For writability, the less you have to write, the better.
> Yes, I'm always surprised to see many languages (including Rust)
> praising themselves for being "concise". Apart from saving some
> keystrokes, I fail to see the benefit of being concise...

Agreed. However, it is a bit of a totem in the FP cult.

--
Bill Findlay

Lawrence D'Oliveiro

unread,
Jan 9, 2024, 3:30:39 PMJan 9
to
On Sat, 6 Jan 2024 21:21:30 -0400, J-P. Rosen wrote:

> Yes, I'm always surprised to see many languages (including Rust)
> praising themselves for being "concise". Apart from saving some
> keystrokes, I fail to see the benefit of being concise...

How about this for an example. I created a Python wrapper around the Cairo
graphics library <https://www.cairographics.org/>. There are already other
Python wrappers, which are little more than transliterations of the C API.
I wanted to go one step further. So whereas in C you might write

x1 = - scope_radius * sin(trace_width_angle);
y1 = scope_radius * cos(trace_width_angle);
cairo_line_to(ctx, x1, y1);

my Python wrapper reduces this down to

ctx.line_to(Vector(0, scope_radius).rotate(trace_width_angle))

How’s that for “concise”?