Hi Ian,
Thanks for the response. You state that “marking objects” ought to be implemented as some form of meta-data rather than directly in the code. Why? What makes the metadata solution superior to a code notation and a persistence-aware compiler?
Hello Bernd,
The need is real-world, and I am hoping the academic work is close enough for a real-world implementation. :-)
“Data lives longer than code”—indeed it does. We are in the throes of moving data from “legacy code” to a new Eiffel system. What you’re really saying is that the data has more business value than the code that produced it. True that!
Code independent data model: Of course. Data is data at the end of the day—regardless of who or what consumes it.
What I am not talking about are Eiffel storables, serialized Java objects, or Python 'pickle' files. This is why I mention ABEL: how the objects are persisted is of secondary importance; that is a choice made for other reasons. What I am referring to are programmer-selected objects that persist without the additional labor of writing a persistence layer of code.
So, whether the ultimate persistence mechanism is SQL, XML, JSON, YAML, or something similar, the result is the same: a safe and sane system of object persistence that removes the burden of programmer labor, much as SCOOP removes the need to write threading code and the Eiffel compiler removes my need to write complex, cross-platform C.
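For concreteness, here is a minimal sketch of what "programmer-selected" persistence could look like on the Eiffel side, using an ordinary note clause as the marker. The persistence tag, and any persistence-aware compiler or tool that would honour it, are hypothetical; nothing in EiffelStudio reads such a tag today.

    note
        description: "A record whose instances the programmer has selected for persistence."
        persistence: "durable"
            -- Hypothetical marker: a persistence-aware compiler or tool would
            -- generate the storage layer for classes carrying this tag.

    class
        CUSTOMER

    create
        make

    feature {NONE} -- Initialization

        make (a_name: STRING; a_credit_limit: REAL_64)
                -- Initialise with `a_name' and `a_credit_limit'.
            do
                name := a_name
                credit_limit := a_credit_limit
            end

    feature -- Access

        name: STRING
                -- Customer name.

        credit_limit: REAL_64
                -- Approved credit limit.

    end

The class itself carries no persistence code at all; the intent is that the marker alone is enough for the tooling to do the rest.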
Nevertheless, your point is well taken! :-)
From: eiffel...@googlegroups.com [mailto:eiffel...@googlegroups.com]
On Behalf Of Bernd Schoeller
Sent: Tuesday, January 12, 2016 1:28 AM
To: eiffel...@googlegroups.com
Subject: Re: [eiffel-users] General Question: Object Persistence Mechanism
Hi -
just out of curiosity: is this an academic or a 'real-world' problem you are trying to solve?
If it is a 'real-world' problem you are trying to solve, the answer to object-persistence mechanisms is: just don't do it.
In reality, data is much longer lived than code. While a good piece of software might live for one or two decades before some serious rewriting is required, data is carried over from one system to the next and is an asset to the business.
The object-oriented blending of code and data is at once OO's greatest strength and its greatest weakness. But the problem also exists with non-OO languages.
The solution is to use programming-language-dependent data models for all transient information, but to select a code-independent data model for your persistent data (examples: SQL tables, XML or JSON, ASN.1, plain ASCII files, CSV, ...).
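As a small illustration of the point (mine, not part of Bernd's message), here is a sketch of writing such code-independent data from Eiffel using PLAIN_TEXT_FILE from EiffelBase; the file name and fields are invented.

    class
        CUSTOMER_EXPORT

    feature -- Output

        append_customer (a_name: STRING; a_credit_limit: REAL_64)
                -- Append one customer as a CSV line that any language or tool can read back.
            local
                f: PLAIN_TEXT_FILE
            do
                create f.make_open_append ("customers.csv")
                f.put_string (a_name + "," + a_credit_limit.out + "%N")
                f.close
            end

    end

The resulting file has no dependence on Eiffel's object model: ten years from now it can be loaded into a spreadsheet, a SQL table, or a program in another language.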
Also: you are dealing with a multi-language environment. The object model is always very different between languages and creates long-term inconsistencies. Look at the major problems Eiffel had trying to map its object model to the .NET one (they did a fantastic job, but it still is a complex beast).
I am talking from years of experience in the industry with object-oriented systems, where after a few years the persisted object files always became a liability and people tried to move away from them (Eiffel storables, serialized Java, Python 'pickle' files). But that was always an extremely painful process, because while the data was still needed, the code was not.
Bernd
On 13 Jan 2016, at 18:04, Bernd Schoeller <bernd.s...@gmail.com> wrote:
The 'academic' vs 'real-world' comparison was concerned with the life-time and maintainability of a solution. Academia rarely thinks beyond the next paper or PhD thesis. It is all very much "fire and forget".
Best motto when it comes to software design in the industry: always code as if the next person to pick up your project is a mass-murdering serial killer.
"... will form the sequence of figures which is the decimal of the real number which is being computed. The others are just rough notes to 'assist the memory'. It will only be these rough notes which will be liable to erasure." (Alan Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem")
Thus a system would automatically know what is important and what is just a rough calculation. As in transaction processing, once a unit of work is complete, it is automatically made permanent (the D, durability, in ACID).
Ian
If the data is something the overarching system cares about, I would think that your assertion is true: once the routine clears its postconditions, the relevant data ought to be persisted.
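A minimal sketch of that idea in Eiffel terms follows; the persistence call is left as a comment because the storage object is hypothetical, but the shape is: do the work, satisfy the postcondition, make it durable.

    class
        ACCOUNT

    create
        make

    feature {NONE} -- Initialization

        make
                -- Start with an empty account.
            do
                balance := 0
            end

    feature -- Access

        balance: INTEGER
                -- Current balance, in cents.

    feature -- Element change

        deposit (a_amount: INTEGER)
                -- Add `a_amount' to `balance'.
            require
                positive: a_amount > 0
            do
                balance := balance + a_amount
                -- storage.save (Current)
                --     Hypothetical: once this routine is about to clear its
                --     postcondition, the completed unit of work is handed to
                --     the persistence layer and made durable (the D in ACID).
            ensure
                added: balance = old balance + a_amount
            end

    end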
In response to a point Bernd made about data v. code: it is not entirely true that data exists apart from the code that created it. This is especially true for data that has been dynamically derived rather than captured statically (like user input). A derived datum is absolutely tied to the version of the algorithm that produced it; saving the data apart from the producing algorithm is to partially or fully lose the capacity to understand the data. In a best-of-all-worlds scenario, it seems to me that one would store: (1) the data result(s), and (2) the names and versions of the producing algorithms. While this metadata would not be the algorithm itself, it would at least give data consumers clues about where the data came from and roughly how it was derived. So, I think it is too simplistic to claim, as a black-and-white matter, that data lives on without code.
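A rough sketch of such a record, with invented class and feature names (this is not an existing library, just an illustration of storing a result together with its provenance):

    class
        DERIVED_VALUE

    create
        make

    feature {NONE} -- Initialization

        make (a_value: REAL_64; a_algorithm: STRING; a_version: STRING)
                -- Record `a_value' together with the name and version of the algorithm that produced it.
            do
                value := a_value
                algorithm_name := a_algorithm
                algorithm_version := a_version
            end

    feature -- Access

        value: REAL_64
                -- The derived datum itself.

        algorithm_name: STRING
                -- Name of the routine or specification that produced `value'.

        algorithm_version: STRING
                -- Version of that algorithm, e.g. "1" or "2", so a later reader
                -- knows which variant computed this particular datum.

    end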
Good morning Thomas/Ian!
:-)
What an excellent discussion! Thanks for taking the time to write in such detail. I am learning and grateful for it.
Thomas, I can see the very good sense of having a specification of the data such that a consumer of it can successfully reason about where it came from, what its purpose is, and (perhaps) how it was derived or came to exist. I also see that one would be overwhelmed by knowing the actual code used to derive the data, and that some abbreviated specification is preferable (e.g. "just enough specification"). What I am now curious about is how you all think about that in terms of algorithmic variants. That is, suppose I have two numbers in a persisted (I like Ian's term "durable") database, each being the same field in two different records. The first is calculated by variant 1 of an algorithm and the second by variant (or version) 2. Given this, does the specification contain documentation about both variants? Moreover, are the data elements somehow marked with their version, so that a reader of the data knows which algorithm variant was used to compute each one?
Again, thank you, gentlemen. This discussion is very helpful.
From: eiffel...@googlegroups.com [mailto:eiffel...@googlegroups.com]
On Behalf Of Thomas Beale
Sent: Thursday, January 14, 2016 5:21 AM
To: Eiffel Users
Subject: Re: [eiffel-users] General Question: Object Persistence Mechanism
Hi Ian,
On Larry's point of computable data: this should not be stored in durable storage. In fact, most database design is concerned with just discerning which data is fundamental, i.e. cannot be derived from other data. Even fundamental relationships of data are kept separate from derived relationships, so that such derived relationships can still be made in an ad-hoc manner (normalisation). The practical consideration here is that some computations are expensive and you might want to store the result of such a computation for later, but such data should clearly be distinguished from fundamental (non-derivable) data. This is cached data; variables are just a form of cache and add no power to the computation process. Variables are just part of that junk I talked about that does not add to fundamental understanding (functional programming).
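In Eiffel terms, that distinction maps naturally onto attributes versus functions; only the attributes belong in durable storage, since the function can always be recomputed. A sketch with invented names:

    class
        ORDER_LINE

    create
        make

    feature {NONE} -- Initialization

        make (a_quantity: INTEGER; a_unit_price: REAL_64)
                -- Initialise the fundamental data of this line.
            do
                quantity := a_quantity
                unit_price := a_unit_price
            end

    feature -- Access (fundamental data: store this)

        quantity: INTEGER
                -- Number of units ordered.

        unit_price: REAL_64
                -- Price per unit.

    feature -- Derived data (do not store; recompute, or cache and mark as cache)

        line_total: REAL_64
                -- Total for this line, derivable from the fundamental data.
            do
                Result := quantity * unit_price
            end

    end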
Larry Rix mentioned layering, which, I think, is something quite fundamental and perhaps is a relevant part of this discussion.
We met it in the ISO seven-layer (OSI) model (physical, data link, network, and so on), and it crops up in many other places as well. So, perhaps one could speak of the "durable data" layer in this discussion. Then, routines that write/encode the durable data, together with routines that read/decode/interpret it, exist in a layer above the durable data layer. This layer implements a process of communication. Perhaps the protocol of this communication is worth considering and exploring in its own right. Are there higher layers, perhaps? Could this layer actually be more than one layer?
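One possible sketch of that read/write layer in Eiffel is a deferred class sitting above the durable data, with the concrete encodings (CSV, JSON, SQL rows, ...) supplied by descendants; the names are illustrative only.

    deferred class
        DURABLE_CODEC [G]

    feature -- Communication layer

        encode (a_object: G): STRING
                -- Durable, code-independent representation of `a_object'.
            deferred
            end

        decode (a_text: STRING): G
                -- Object reconstructed from its durable representation `a_text'.
            deferred
            end

    end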
Many years ago, I wrote multiple process real-time software for microprocessors. In that case, there were four layers: hardware, device drivers, processes and process communication. Rather than designing a blob of software called an application, I split it into several processes, one for each source of activity that needed to be handled. Separating the processes from process communication made designs robust and easy.
--
Peter Horan
pe...@deakin.edu.au
+61-3-5221 1234 (Voice)
+61-4-0831 2116 (Mobile)
Faculty of Science, Engineering and Built Environment
Deakin University
Geelong, Victoria 3217, AUSTRALIA
-- The Eiffel guarantee: From specification to implementation
-- (http://www.objenv.com/cetus/oo_eiffel.html)
I wrote of the “durable data layer” that:
“This layer implements a process of communication. Perhaps the protocol of this communication is worth considering and exploring in its own right.”
Thinking further in the context of databases and the like, the "communication" conceals the fact that objects whose states are saved as persistent data, and which are reconstructed from that data when later required, may not be in memory at all during the interval in between. But as far as some higher layer holding clients of these objects is concerned, they continued to exist.
Application layer:    ------------------------- client -------------------------
                           |                                        |
                           v                                        v
Communication layer:    object   (garbage collected)   (object recreated)   object
                           |                                        |
                           v                                        v
Durable data layer:     ----------- durable data (continues to exist) -----------

                      -----------------------------------------------------------> time
This relates to my view (17 Jan 16) that the durable data is a lower layer than the communication layer. The interpretation of data may change. That is, the meaning conveyed at the communication level changes. So, how should changes in meaning be managed?
From: eiffel...@googlegroups.com [mailto:eiffel...@googlegroups.com]
On Behalf Of lrix
Sent: Saturday, 23 January 2016 07:15
To: Eiffel Users
Subject: Re: [eiffel-users] General Question: Object Persistence Mechanism
Precisely correct—that is the point exactly.
On Tue 26/01/2016 05:38, Larry wrote:
“The very nature of data is that it can exist without any surrounding context. It is the context that provides semantics. … I still have no means by which to know precisely how that price was computed.”
Data may exist by itself, but without context (the communication layer I referred to), the data is meaningless.
Larry >> “Ultimately, we only need to know enough information to work with the data in a reasonable manner—regardless of the programming language or system used to create the data.”
Do we really need to know how some data was computed? I think the quote implies that we do not. No algorithm is needed, but “communication about the data”, that is, its context, is necessary.
I am introducing the concept of layers to the discussion because it may be a useful point of view and may also guide design. For example, what is the "least context" needed to interpret data, and how should it be made available (communicated, encoded)?