On 6/9/2017 7:14 PM, George Neuner wrote:
> On Fri, 9 Jun 2017 00:06:05 -0700, Don Y <blocked...@foo.invalid>
> wrote:
>
>> On 6/8/2017 3:38 AM, George Neuner wrote:
>>>
>>> ... adopt a throw-away mentality: replace rather than maintain.
>>>
>>> That basically is the idea behind the whole agile/devops/SaaS
>>> movement: if it doesn't work today, no problem - there will be a new
>>> release tomorrow [or sooner].
>>
>> I think those are just enablers for PHB's who are afraid to THINK
>> about what they want (in a product/design) and, instead, want to be shown
>> what they DON'T want.
>
> IME most people [read "clients"] don't really know what they want
> until they see what they don't want.
I've typically only found that to be the case when clients (often
using "their own" money) can't decide *if* they want to enter a
particular market.  They want to see something to gauge their own
reaction to it: is it an exciting product, or just another warmed-over
stale idea?
I used to make wooden mockups of devices just to "talk around".
Then foamcore. Then, just 3D CAD sketches.
But, how things work was always conveyed in prose. No need to see
the power light illuminate when the power switch was toggled. If
you can't imagine how a user will interact with a device, then
you shouldn't be developing that device!
The only "expensive" dog-and-pony's were cases where the underlying
technology was unproven. Typically mechanisms that weren't known
to behave as "envisioned" without some sort of reassurances (far from a
clinical *proof*). I don't have an ME background so can never vouch
for mechanical designs; if the client needs reassurance, the ME has
to provide it *or* invest in building a real mechanism (which often
just "looks pretty" without any associated driving electronics)
> Most people go into a software development effort with a reasonable
> idea of what it should do ... subject to revision if they are allowed
> to think about it ... but absolutely no idea what it should look like
> until they see - and reject - several demos.
That's just a failure of imagination.  A good spec (or manual) should
allow a developer or potential user to imagine actually using the
device before anything has been reified.  It's expensive building
space shuttles just to figure out what they should look like!  :>
> The entire field of "Requirements Analysis" would not exist if people
> knew what they wanted up front and could articulate it to the
> developer.
IMO, the problem with the agile approach is that there is too much
temptation to cling to whatever you've already implemented. And, if
you've not thoroughly specified its behavior and characterized its
operation, you've got a black box with unknown contents -- that you
will now convince yourself does what it "should" (without having
designed it with knowledge of that "should").
So, you end up on the wrong initial trajectory and don't discover
the problem until you've baked lots of "compensations" into the
design.
[The hardest thing to do is convince yourself to start over]
>>> For almost any non-system application, you can do without (explicit
>>> source level) pointer arithmetic. But pointers and the address
>>> operator are fundamental to function argument passing and returning
>>> values (note: not "value return"), and it's effectively impossible to
>>> program in C without using them.
>>
>> But, if you'd a formal education in CS, it would be trivial to
>> semantically map the mechanisms to value and reference concepts.
>> And, thinking of "reference" in terms of an indication of WHERE
>> it is! etc.
>
> But only a small fraction of "developers" have any formal CS, CE, or
> CSE education. In general, the best you can expect is that some of
> them may have a certificate from a programming course.
You've said that in the past, but I can't wrap my head around it.
It's like claiming very few doctors have taken any BIOLOGY courses!
Or, that a baker doesn't understand the basic chemistries involved.
>> Similarly, many of the "inconsistencies" (to noobs) in the language
>> could easily be explained with "common sense":
>> - why aren't strings/arrays passed by value? (think about how
>> ANYTHING is passed by value; the answer should then be obvious)
>> - the whole notion of references being IN/OUT's
>> - gee, const can ensure an IN can't be used as an OUT!
>> etc.
>
> That's true ... but then you get perfectly reasonable questions like
> "why aren't parameters marked as IN or OUT?", and have to dance around
> the fact that the developers of the language were techno-snobs who
> didn't expect that clueless people ever would be trying to use it.
That's a shortcoming of the language's syntax.  But, it doesn't prevent
you from annotating the parameters as such.
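For example (a minimal sketch; the IN/OUT macros here are invented
just for illustration), const carries most of the intent in C:

    /* Hypothetical annotations: an IN is const, an OUT is not. */
    #define IN  const
    #define OUT

    void scale(IN double *src, OUT double *dst,
               IN double factor, int count)
    {
        for (int i = 0; i < count; i++)
            dst[i] = src[i] * factor;  /* writing src[i] would not
                                          compile: it's const     */
    }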
My IDL requires formal specification because it has to know how to marshal
and unmarshal on each end.
> Or "how do I ensure that an OUT can't be used as an IN?" Hmmm???
>
>> I think the bigger problem is that folks are (apparently) taught
>> "keystrokes" instead of "concepts": type THIS to do THAT.
>
> There is an element of that. But also there is the fact that many who
> can DO cannot effectively teach.
Of course! SWMBO has been learning that lesson with her artwork.
Taking a course from a "great artist" doesn't mean you'll end up
learning anything or improving YOUR skillset.
> I knew someone who was taking a C programming course, 2 nights a week
> at a local college. After (almost) every class, he would come to me
> with questions and confusions about the subject matter. He remarked
> on several occasions that I was able to teach him more in 10 minutes
> than he learned in a 90 minute lecture.
But I suspect you had a previous relationship with said individual.
So, you knew how to "relate" concepts to him/her.
Many of SWMBO's (female) artist-friends seem to have trouble grok'ing
perspective.  They read books, take courses, etc. and still can't seem
to wrap their heads around the idea.
I can sit down with them one-on-one and convey the concept and "mechanisms"
in a matter of minutes: "Wow! This is EASY!!" But, I'm not trying to sell
a (fat!) book or sign folks up for hours of coursework, etc. And, I know
how to pitch the ideas to each person individually, based on my prior knowledge
of their backgrounds, etc.
>>> This pushes newbies to learn about pointers, machine addressing and
>>> memory management before many are ready. There is plenty else to
>>> learn without *simultaneously* being burdened with issues of object
>>> location.
>>
>> Then approach the topics more incrementally. Instead of introducing
>> the variety of data types (including arrays), introduce the basic
>> ones. Then, discuss passing arguments -- and how they are COPIED into
>> a stack frame.
>
> A what frame?
>
> I once mentioned "stack" in a response to a question posted in another
> forum. The poster had proudly announced that he was a senior in a CS
> program working on a midterm project. He had no clue that "stacks"
> existed other than as abstract notions, didn't know the CPU had one,
> and didn't understand why it was needed or how his code was faulty for
> (ab)using it.
>
> So much for "CS" programs.
<frown> As time passes, I am becoming more convinced of the quality of
my education. This was "freshman-level" coursework: S-machines, lambda
calculus, petri nets, formal grammars, etc.
[My best friend from school recounted taking some graduate level
courses at Northwestern. First day of the *graduate* level AI
course, a fellow student walked in with the textbook under his
arm. My friend asked to look at it. After thumbing through
a few pages, he handed it back: "I already took this course...
as a FRESHMAN!"]
If I had "free time", I guess it would be interesting to see just what
modern teaching is like, in this field.
>> This can NATURALLY lead to the fact that you can only "return" one
>> datum; which the caller would then have to explicitly assign to
>> <whatever>. "Gee, wouldn't it be nice if we could simply POINT to
>> the things that we want the function (subroutine) to operate on?"
>
> Huh? I saw once in a textbook that <insert_language> functions can
> return more than one object. Why is this language so lame?
Limbo makes extensive use of tuples as return values. So, silly
not to take advantage of that directly. (changes the syntax of how you'd
otherwise use a function in an expression but the benefits outweigh the
costs, typ).
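In C, you end up faking the tuple with OUT pointers (or by returning
a struct).  A minimal sketch -- divmod() is invented for illustration:

    #include <stdio.h>

    /* One value comes back via the return, the second via a
       pointer the caller supplies.                           */
    int divmod(int num, int den, int *rem)
    {
        *rem = num % den;
        return num / den;
    }

    int main(void)
    {
        int r;
        int q = divmod(17, 5, &r);
        printf("q=%d r=%d\n", q, r);   /* q=3 r=2 */
        return 0;
    }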
>> I just think the teaching approach is crippled. Its driven by industry
>> with the goal of getting folks who can crank out code, regardless of
>> quality or comprehension.
>
> You and I have had this discussion before [at least in part].
>
> CS programs don't teach programming - they teach "computer science".
> For the most part CS students simply are expected to know.
I guess I don't understand the difference.
In my mind, "programming" is the plebeian skillset.
programming : computer science :: ditch-digging : landscaping
I.e., ANYONE can learn to "program". It can be taught as a rote skill.
Just like anyone can be taught to reheat a batch of ready-made cookie
dough to "bake cookies".
The CS aspect of my (EE) degree showed me the consequences of different
machine architectures, the value of certain characteristics in the design
of a language, the duality of recursion/iteration, etc. E.g., when I
designed my first CPU, the idea of having an "execution unit" started
by the decode of one opcode and CONTINUING while other opcodes were
fetched and executed wasn't novel; I'd already seen it done on 1960's
hardware.
[And, if the CPU *hardware* can do two -- or more -- things at once, then
the idea of a *program* doing two or more things at once is a no-brainer!
"Multitasking? meh..."]
> CSE programs are somewhat better because they [purport to] teach
> project management: selection and use of tool chains, etc. But that
> can be approached largely in the abstract as well.
This was an aspect of "software development" that was NOT stressed
in my curriculum. Nor was "how to use a soldering iron" in the
EE portion thereof (the focus was more towards theory with the
understanding that you could "pick up" the practical skills relatively
easily, outside of the classroom).
> Many schools are now requiring that a basic programming course be
> taken by all students, regardless of major. But this is relatively
> recent, and the language de choix varies widely.
I know every EE was required to take some set of "software" courses.
Having attended an engineering school, I suspect that was true of
virtually every "major". Even 40 years ago, it was hard to imagine
any engineering career that wouldn't require that capability.
[OTOH, I wouldn't trust one of the ME's to design a programming
language any more than I'd trust an EE/CS to design a *bridge*!]
>> But you can still expose a student to the concepts of the underlying
>> machine, regardless of language. Introduce a hypothetical machine...
>> something with, say, memory and a computation unit. Treat memory
>> as a set of addressable "locations", etc.
>
> That's covered in a separate course: "Computer Architecture 106". It
> is only offered Monday morning at 8am, and it costs another 3 credits.
I just can't imagine how you could explain "programming" a machine to a
person without that person first understanding how the machine works.
It's not like teaching someone to *drive*, where the student can
remain blissfully ignorant of the many small explosions happening
each second, under the hood!
[How would you teach a car mechanic to perform repairs if he didn't
understand what the components he was replacing *did* or how they
interacted with the other components?]
>> My first "computer texts" all presented a conceptual model of a
>> "computer system" -- even though the languages discussed
>> (e.g., FORTRAN) hid much of that from the casual user.
>
> Every intro computer text introduces the hypothetical machine ... and
> spends 6-10 pages laboriously stretching out the 2-sentence description
> you gave above. If you're lucky there will be an illustration of an
> array of memory cells.
>
> Beyond that, you are into specialty texts.
My first courses (pre-college) went to great lengths to explain the hardware
of the machine, DASDs vs. SASDs, components of access times, overlapped
I/O, instruction formats (in a generic sense -- PCs hadn't been invented,
yet), binary-decimal conversion, etc.  But, then again, these were new ideas
at the time, not old saws.
>>> For general application programming, there is no need for a language
>>> to provide mutable pointers: initialized references, together with
>>> array (or stream) indexing and struct/object member access are
>>> sufficient for virtually any non-system programming use. This has
>>> been studied extensively and there is considerable literature on the
>>> subject.
>>
>> But then you force the developer to pick different languages for
>> different aspects of a problem. How many folks are comfortable
>> with this "application specific" approach to *a* problem's solution?
>
> Go ask this question in a Lisp forum where writing a little DSL to
> address some knotty aspect of a problem is par for the course.
>
>> E.g., my OS is coded in C and ASM. Most of the core services are
>> written in C (so I can provide performance guarantees) with my bogus
>> IDL to handle RPC/IPC. The RDBMS server is accessed using SQL.
>> And, "applications" are written in my modified-Limbo.
>
> What does CLIPS use?
It's hard to consider CLIPS's "language" to be a real "programming
language" (i.e., Turing complete -- it probably *is*, but with ghastly
syntax!).  It bears the same sort of relationship that SQL has to
RDBMSs, or SNOBOL to string processing.  It's primarily concerned with
asserting and retracting facts based on patterns of recognized facts.
While you *can* code an "action" routine in its "native" language, I
find it easier to invoke an external routine (C) that uses the API
exported by CLIPS to do all the work.  In my case, it would be difficult
to code an "action routine" entirely in CLIPS and be able to access
the rest of the system via the service-based interfaces I've implemented.
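The general shape of that arrangement, from memory of the classic
CLIPS 6.x embedding API (registration call and signatures are per the
Advanced Programming Guide as I recall them -- verify against your
version; do_action() and rules.clp are invented):

    #include "clips.h"

    /* Hypothetical action routine: the rules decide WHEN,
       the C code decides HOW.                              */
    static void do_action(void)
    {
        /* ... call out to the rest of the system here ...  */
    }

    int main(void)
    {
        InitializeEnvironment();

        /* Expose do_action() to rule RHS's as (do-action). */
        DefineFunction2("do-action", 'v', PTIEF do_action,
                        "do_action", "00");

        Load("rules.clp");      /* rules assert/retract facts  */
        AssertString("(start)");
        Run(-1L);
        return 0;
    }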
> By my count you are using 6 different languages ... 4 or 5 of which
> you can virtually count on the next maintainer not knowing.
Yes.  But I'm not designing a typical application; rather, a *system*
of applications, services, OS, etc.  I wouldn't expect one language to
EFFICIENTLY tackle all of them.  And, I'd have to build all of those
components from scratch if I wanted complete control over their
implementation languages (I have no desire to write an RDBMS just so
I can AVOID using SQL).
> What would you have done differently if C were not available for
> writing your applications? How exactly would that have impacted your
> development?
The applications are written in Limbo. I'd considered other scripting
languages for that role -- LOTS of other languages! -- but Limbo already
had much of the support I needed to layer onto the "structure" of my
system. Did I want to invent a language and a hosting VM (to make it
easy to migrate applications at run-time)? Add multithreading hooks
to an existing language? etc.
[I was disappointed with most language choices as they all tend to
rely heavily on punctuation and other symbols that aren't "voiced"
when reading the code]
C just gives me lots of bang for the buck. I could implement all of this
on a bunch of 8b processors -- writing interpreters to allow more complex
machines to APPEAR to run on the simpler hardware, creating virtual address
spaces to exceed the limits of those tiny processors, etc. But, all that
would come at a huge performance cost. Easier just to *buy* faster
processors and run code written in more abstract languages.
>> This (hopefully) "works" because most folks will only be involved
>> with *one* of these layers. And, folks who are "sufficiently motivated"
>> to make their additions/modifications *work* can resort to cribbing
>> from the existing parts of the design -- as "examples" of how they
>> *could* do things ("Hey, this works; why not just copy it?")
>
> Above you complained about people being taught /"keystrokes" instead
> of "concepts": type THIS to do THAT./ and something about how that
> led to no understanding of the subject.
There's a difference between the types of people involved. I don't
expect anyone from "People's Software Institute #234B" to be writing
anything beyond application layer scripts. So, they only need to
understand the scripting language and the range of services available
to them. They don't have to worry about how I've implemented each
of these services. Or, how I move their application from processor
node 3 to node 78 without corrupting any data -- or, without their
even KNOWING that they've been moved!
Likewise, someone writing a new service (in C) need not be concerned with
the scripting language. Interfacing to it can be done by copying an
interface for an existing service. And, interfacing to the OS can as
easily mimic the code from a similar service.
You obviously have to understand the CONCEPT of "multiplication" in
order to avail yourself of it. But, do you care if it's implemented
in a purely combinatorial fashion? Or, iteratively with a bunch of CSA's?
Or, by tiny elves living in a hollow tree?
In my case, you have to understand that each function/subroutine invocation
just *appears* to be a local subroutine/function invocation.  That, in reality,
it can be running code on another processor in another building -- concurrent
with what you are NOW doing (this is a significant conceptual difference
between traditional "programming" where you consider everything to be a
series of operations -- even in a multithreaded environment!).
You also have to understand that your "program" can abend or be aborted
at any time. And, that persistent data has *structure* (imposed by
the DBMS) instead of being just BLOBs. And, that agents/clients have
capabilities that are finer-grained than "permissions" in conventional
systems.
But, you don't have to understand how any of these things are implemented
in order to use them correctly.
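That first point -- a "call" that may execute elsewhere -- looks like
this from the caller's side.  Everything here is invented for
illustration (in the real system, the stub is generated by the IDL and
service_add() would run on some other node):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* invented wire-format helpers */
    static void marshal_i32(uint8_t *p, int32_t v)
    { memcpy(p, &v, 4); }
    static int32_t unmarshal_i32(const uint8_t *p)
    { int32_t v; memcpy(&v, p, 4); return v; }

    /* stand-in for the remote service; really runs elsewhere */
    static void service_add(const uint8_t *req, uint8_t *rsp)
    {
        marshal_i32(rsp, unmarshal_i32(req) +
                         unmarshal_i32(req + 4));
    }

    /* the stub the application actually links against */
    static int32_t remote_add(int32_t a, int32_t b)
    {
        uint8_t req[8], rsp[4];
        marshal_i32(req, a);            /* flatten the args    */
        marshal_i32(req + 4, b);
        service_add(req, rsp);          /* really: send; await */
        return unmarshal_i32(rsp);
    }

    int main(void)
    {
        printf("%d\n", remote_add(2, 3));  /* looks local...   */
        return 0;
    }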
>> OTOH, if someone had set out to tackle the whole problem in a single
>> language/style... <shrug>
>
> It would be a f_ing nightmare. That's precisely *why* you *want* to
> use a mix of languages: often the best tool is a special purpose
> domain language.
But that complicates the design (and maintenance) effort(s) -- by requiring
staff with those skillsets to remain available. Imagine if you had to
have a VLSI person on hand all the time in case the silicon in your CPU
needed to be changed...
>>> The modern concept of availability is very different than when you had
>>> to wait for a company to provide a turnkey solution, or engineer
>>> something yourself from scratch. Now, if the main distribution
>>> doesn't run on your platform, you are likely to find source that you
>>> can port yourself (if you are able), or if there's any significant
>>> user base, you may find that somebody else already has done it.
>>
>> That works for vanilla implementations. It leads to all designs
>> looking like all others ("Lets use a PC for this!"). This is
>> fine *if* that's consistent with your product/project goals.
>> But, if not, you're SoL.
>
> Yeah ... well the world is going that way. My electric toothbrush is
> a Raspberry PI running Linux.
I suspect my electric toothbrush has a small MCU at its heart.
>> An advantage of ASM was that there were *relatively* few operators
>> and addressing modes, etc.
>
> Depends on the chip. Modern x86_64 chips can have instructions up to
> 15 bytes (120 bits) long. [No actual instruction *is* that long, but
> that is the maximum the decoder will accept.]
But the means by which the "source" is converted to the "binary" is
well defined. Different EA modes require different data to be present
in the instruction byte stream -- and, in predefined places relative to
the start of the instruction (or specific locations in memory).
And, SUB behaved essentially the same as ADD -- with the same range of
options available, etc.
[You might have to remember that certain instructions expected certain
parameters to be implicitly present in specific registers, etc.]
>>>> The (early) languages that we settled on were simple to implement
>>>> on the development platforms and with the target resources. Its
>>>> only as targets have become more resource-rich that we're exploring
>>>> richer execution environments (and the attendant consequences of
>>>> that for the developer).
>>>
>>> There never was any C compiler that ran on any really tiny machine.
>>
>> Doesn't have to run *on* a tiny machine. It just had to generate code
>> that could run on a tiny machine!
>
> Cross compiling is cheating!!!
>
> In most cases, it takes more resources to develop a program than to
> run it ... so if you have a capable machine for development, why do
> need a *small* compiler?
Because not all development machines were particularly capable.
My first project was i4004 based, developed on an 11.
The newer version of the same product was i8085 hosted and developed on
an MDS800.  IIRC, the MDS800 was *8080* based and limited to 64KB of
memory (no fancy paging, bank switching, etc.).  I think a second 8080
ran the I/O's.  So, building an object image was lots of passes, lots
of "egg scrambling" (the floppies always sounded like they were
grinding themselves to death).
I.e., if we'd opted to replace the EPROM in our product with SRAM
(or DRAM) and add some floppies, the product could have hosted the
tools.
> A small runtime footprint is a different issue, but *most* languages
> [even GC'd ones] are capable of operating with a small footprint.
>
> Once upon a time, I created a Scheme-like GC'd language that could do
> a hell of a lot in 8KB total for the compiler, runtime, a reasonably
> complex user program and its data.
>
>> E.g., we used an 11 to write our i4004 code; the idea of even something
>> as crude as an assembler running *ON* an i4004 was laughable!
>
> My point exactly. In any case, you wouldn't write for the i4004 in a
> compiled language. Pro'ly not for the i8008 either, although I have
> heard claims that that was possible.
I have a C compiler that targets the 8080, hosted on CP/M. Likewise, a
Pascal compiler and a BASIC compiler (and I think an M2 compiler) all
hosted on that 8085 CP/M machine.
The problem with HLL's on small machines is that the helper routines and
standard libraries can quickly eat up ALL of your address space!
I designed several z180-based products in C -- but the (bizarre!)
bank switching capabilities of the processor would let me do things like
stack the object code for different libraries in the BANK section
and essentially do "far" calls through a bank-switching intermediary
that the compiler would automatically invoke for me.
By cleverly designing the memory map, you could have large DATA
and large CODE -- at the expense of lengthened call/return times
(of course, the interrupt system had to remain accessible at
all times so you worked hard to keep that tiny lest you waste
address space catering to it).
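The bank-switching intermediary amounts to: save the current bank, map
the callee's, call, restore.  A hand-waved sketch -- the port number
assumes the Z180's default I/O relocation, and out_byte()/in_byte()
are hypothetical helpers (every toolchain spells them differently):

    #include <stdint.h>

    #define BBR_PORT 0x39   /* Z180 Bank Base Register (default) */

    extern void    out_byte(uint8_t port, uint8_t val);
    extern uint8_t in_byte(uint8_t port);

    /* a "far" target: which bank, and where within it */
    typedef struct { uint8_t bank; void (*entry)(void); } farcall_t;

    void far_call(const farcall_t *f)
    {
        uint8_t old = in_byte(BBR_PORT);  /* caller's bank      */
        out_byte(BBR_PORT, f->bank);      /* map callee's bank  */
        f->entry();                       /* now a near call    */
        out_byte(BBR_PORT, old);          /* restore on return  */
    }

Of course, far_call() itself (like the interrupt system) has to live
in the always-mapped common area -- hence the pressure to keep that
region tiny.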
> You view everything through the embedded lens.

Sure we are!  This is C.A.E!  :>  If we're talking about all
applications, then are we also dragging big mainframes into the mix?
Where's mention of PL/1 and the other big iron running it?

>> Ditto Pascal. How much benefit is there in controlling a motor
>> that requires high level math and flagrant automatic type conversion?
>
> I don't even understand this.
Motor control is a *relatively* simple algorithm. No *need* for complex
data types, automatic type casts, etc. And, what you really want is
deterministic behavior; you want to know that a particular set of
"instructions" (in a HLL?) will execute in a particular, predictable time
frame without worrying about some run-time support mechanism (e.g., GC)
kicking in and confounding the expected behavior.
[Or, having to take explicit measures to avoid this because of the choice
of HLL]
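The sort of loop meant here, sketched minimally (an integer-only PI
update; the gains and scaling are made up): each iteration costs the
same handful of instructions, with no allocator or GC in sight.

    #include <stdint.h>

    typedef struct { int32_t kp, ki, integ; } pi_t;

    /* One control step: fixed, predictable execution time. */
    static int16_t pi_step(pi_t *c, int16_t setpoint,
                           int16_t measured)
    {
        int32_t err = setpoint - measured;
        c->integ += err;                      /* accumulate    */
        int32_t u = c->kp * err + c->ki * c->integ;
        return (int16_t)(u >> 10);            /* fixed-point   */
    }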
>> Smalltalk? You *do* know how much RAM cost in the early 80's??
>
> Yes, I do.
>
> I also know that I had a Smalltalk development system that ran on my
> Apple IIe. Unfortunately, it was a "personal" edition that was not
> able to create standalone executables ... there was a "professional"
> version that could, but it was too expensive for me ... so I don't
> know how small a 6502 Smalltalk program could have been.
>
> I also had a Lisp and a Prolog for the IIe. No, they did not run in
> 4KB, but they were far from useless on an 8-bit machine.
As I said, I did a lot with 8b hardware.  But, you often didn't have a lot
of resources "to spare" with that hardware.
I recall going through an 8085 design and counting the number of
subroutine invocations (CALL's) for each specific subroutine.
Then, replacing the CALLs to the most frequently accessed subroutine
with "restart" instructions (RST) -- essentially a one-byte CALL
that vectored through a specific hard-coded address in the memory
map. I.e., each such replacement trimmed *2* bytes from the size of
the executable. JUST TWO!
We did that for seven of the eight possible RST's. (RST 0 is hard to
cheaply use as it doubles as the RESET entry point). The goal being to
trim a few score bytes out of the executable so we could eliminate
*one* 2KB EPROM from the BoM (because we didn't need the entire
EPROM, just a few score bytes of it -- so why pay for a $50 (!!)
chip if you only need a tiny piece of it? And, why pay for ANY of
it if you can replace 3-byte instructions with 1-byte instructions??)
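To put (made-up) numbers on it: a routine CALLed from 40 places costs
40 x 3 = 120 bytes of CALL opcodes.  As RST's, that's 40 x 1 = 40
bytes, plus a 3-byte JMP parked at the RST vector if the routine can't
sit there itself: 43 bytes, for a net saving of 77 from ONE
substitution.  Do that for seven RST's and the "few score bytes"
needed to drop the EPROM falls out quickly.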