
Future tools should do less building and more analyzing


John Herbster
Aug 11, 2007, 1:27:33 PM

I think that modern computer programmers would be
better off if we could do more good fact-based
analysis and less "wondering".

The latest programming tools have tended to be
more for building programs than for analyzing
them. In this push to create tools for building
programs, we have also been overlooking significant
gaps in the tool capabilities (like the lack of
(a) a good variable type for designating a particular
time or doing arithmetic with time values, (b) fixed
and floating point decimal number variables that
could match our business calculations, and (c)
assignments that will warn us of loss of precision).
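As a rough illustration of gap (a), the kind of time type John asks for does exist in some modern runtimes (this sketch is in Python rather than Delphi, and only approximates what he wants, since it carries a fixed offset rather than a named civil zone):

```python
from datetime import datetime, timedelta, timezone

# A civil time with an explicit zone (UTC-5 here), not a bare float.
t = datetime(2007, 8, 11, 13, 27, 33, tzinfo=timezone(timedelta(hours=-5)))

# Increments are exact integer arithmetic on microseconds, so adding and
# then subtracting the same interval always round-trips without drift.
step = timedelta(days=90, seconds=1)
assert t + step - step == t

# Converting to another zone changes the wall-clock reading but
# preserves the instant in time.
utc = t.astimezone(timezone.utc)
assert utc.hour == 18 and utc == t
```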

We need to get back to
(1) filling in the gaps in the capabilities of our
tools so that our tools could better fit the problem
and
(2) making better analysis tools that would tell us
why our programs are failing or might fail.

This could lead to a lot less wondering, especially
group wondering, more programmers that could solve
their own problems, and, who knows, maybe even a
newsgroup server, compiler, IDE, or TLB editor that
could diagnose its own problems.

JohnH, 2007-08-11

Paul Nichols [TeamB]
Aug 12, 2007, 2:05:37 AM
John Herbster wrote:
> I think that modern computer programmers would be
> better off if we could do more good fact-based
> analysis and less "wondering".
>
Generally, this is what profilers are for.

What I find wrong with development today is not the tools we have, but
the failure of IT shops to properly architect and scope an application
before development begins.

Things have surely changed since I began my career in a professional IT
environment. I remember the days when IT managers had actually worked
in IT. Application managers had written programs themselves, so they
understood the fundamentals of a proper SDLC. They understood the
nuances of application development because they had actually been there
and done that.

Network managers had actually been engineers who had built real systems
and understood what it meant to properly set up and spec out things
like latency issues, security, and system matrices.

Today, I meet precious few IT managers who know an array from a map, a
function from a procedure, or a class hierarchy from a copybook or lib.
Yet these same managers attempt to manage IT and IT projects (I will
include PMs here as well).

Therefore, once the basic business requirements are gathered, these
people believe the specs are adequate to begin development. By this I
mean they stop at the BRD/FRD level and do not think about architecting
the actual programs with Sequence, Activity, State, and Class diagrams.
Of course, years ago UML was not used, but pick your methodology here
(flow charts, prototyping, pseudo code, etc.). They do not want to
spend the time doing these essential steps and many times will not even
allow cross-group architecture (where one group controls one part of
the application and another group the other part or parts). All they
seem to care about is pumping out applications and coming up with some
time-line estimate, without doing the homework necessary to provide a
realistic time line.

65% or more of development issues could be and would be sorted out if,
as part of the SDLC process, time were properly allocated for the
design process before actual coding began. But with managers and PMs
who understand nothing about IT running the show, they do not
understand why this needs to be done, and they view such activities as
stalling or a waste of development time and resources. IMHO, this is
why 70% of software projects are over time and over budget. It did not
use to be this way, so that of course is the basis of my opinion.

I remember the days when for a typical 12 month project, we would spend
3 months gathering business requirements, 6 months architecting the
design, and only 3-4 months actually writing the code. Why such an
apparent discrepancy? Because we knew exactly what we were coding, why,
and how. We knew what functions, classes, and procedures we needed to
develop, and what external APIs we were going to use and why. We had
real design specs, blueprints for the application, not just business
requirements, and we had developed these across all tiers where
integration was necessary. Sure, this process took time, but it saved in
actual development time and made maintenance so much easier. We rarely
went overtime and over budget, and the resulting code was clean. QA and
UAT Testing was dramatically reduced, since the application was coded
according to the specs to begin with. We did not have to discover what
the REAL application requirements were during the development process and
the "gotchas" were kept to a bare minimum, since design flaws were
fleshed out before coding actually began.

Having lived in both realms, I have to say I miss the "good ole days." I
am so sick and tired of having a BRD in front of me and then being
demanded to provide an estimate of how much time it will take to develop
X and how many resources I need, before I have a chance to do any true
architecture. It is much too common in today's IT shops and it means
more hours and more frustration. Unfortunately, too many of today's
developers do not know anything about properly planning and scoping out
projects, because they have never worked under a system where following
such rational steps is considered the norm rather than the exception.
It is not these developers' fault, either.

We have better tools today than ever before, yet we have buggier code
being released. Why? I remember when there were very few IDEs, yet
the code was cleaner, tighter, and better integrated. Fewer tools, but
better management and planning, was the key.

The old adage is always true, yet is rapidly becoming a lost art: "Fail
to plan, plan to fail." Too many today are failing to plan and the
results are obvious.

Joe Meyer
Aug 12, 2007, 3:10:40 AM
Being an old-timer in this business myself, I couldn't agree more. Today's
software design often works as if a car manufacturer drew the new car on
the wall and let the mechanics start working. I'm sick too of all these
"I need an application doing this and that, what does it cost and how
long does it take?" requests. But many IT shops also have this one
underpaid guy (see the link in John Jacobson's post "Atlas shrugged off")
who works day and night and manages to knit something together, sometimes
good, sometimes bad.

Joe

Eric Grange
Aug 12, 2007, 3:54:39 AM
> The old adage is always true, yet is rapidly becoming a lost art: "Fail
> to plan, plan to fail." Too many today are failing to plan and the
> results are obvious.

I have to concur; this has sort of fallen "out of fashion". All the
time-saving and project-saving decisions are IME made (or not made) in the
very early days, by knowing not just where you want to go, but also how
you'll get there.

There are even "methodologies" that focus on getting things done first,
however dirty they may be done, and then hoping everything can at a later
stage be refactored into stability/performance/whatever (yes, vanilla
extreme programming, I'm looking at you)... Of course, at that later stage
there is no time for cleanup, not least because "getting things done
however dirty" isn't a time-saver in itself.

Eric

David Smith

unread,
Aug 12, 2007, 4:05:38 AM8/12/07
to
Paul Nichols [TeamB] wrote:
>
> The old adage is always true, yet is rapidly becoming a lost art: "Fail
> to plan, plan to fail." Too many today are failing to plan and the
> results are obvious.

One thing about planning is that for smaller projects, it's possible to
reach the end without proper planning and have a decent outcome. But
very quickly, as the project size increases, the lack of planning will
become a major problem.

We have one of these large projects in a bad state and, as sad as it is,
the current managers and the young "architects" of the project don't
seem to understand the importance of the planning phase. It was skipped
altogether. They see now that it was a mistake, but they don't seem
to understand that it was a huge mistake, and it may very well doom the
whole project.

And yes, we are using the very latest tools from our vendor. <Sigh>

David S.

Dennis Landi
Aug 12, 2007, 6:31:18 AM

"Eric Grange" <egra...@SPAMglscene.org> wrote in message
news:46bebca8$1...@newsgroups.borland.com...

Totally agree, which is why "vanilla" Extreme Programming of the last decade
has been just another Emperor-with-no-clothes.

-d


Paul Nichols [TeamB]
Aug 12, 2007, 1:51:51 PM

You are certainly not alone, David. The last project I had (previous to
my new position) worked my team 60-80 hours a week for months. I wrote
documents, protested, etc. I tried to explain that the project was
woefully under-planned and not even remotely scoped properly. The
response back was "NO excuses, we only want to see results!"

I lost two people on this project (very good and irreplaceable
developers) as a result. Surprised I did not lose more!


Ed
Aug 12, 2007, 9:46:36 PM
I'm curious. Now that "it is a mistake" is known, how are they
going to solve the problem?

Also, pardon my ignorance, but what exactly is the 'planning phase'?
What does one plan for? How does one plan? I realize these questions
are probably answered during the academic phase in Computer Science,
but since I'm not a Comp.Sci. graduate, I'm curious how a person
such as I can plan projects.

I have a project which has a function in the grand scheme
of things at work. I have the tools to complete this project. I'm
the only person developing said project. What exactly do I need
to plan, aside from the typical "which objects should be developed
to handle this"?

Edmund

Ed
Aug 12, 2007, 9:48:14 PM
Is this all about ALM or profiling code?

Edmund

Martin Waldenburg
Aug 12, 2007, 11:17:13 PM
Ed wrote:

> Is this all about ALM or profiling code?

It's about starting the brain before working.
ALM as it is nowadays is only an excuse not to use
the brain for what one got it for: thinking.


Martin

Paul Nichols [TeamB]
Aug 13, 2007, 12:31:23 AM
Ed wrote:

> I'm curious. Now that "it is a mistake" is known, how are they
> going to solve the problem.
>

That's the problem: there is no way to solve the issue if you did not
properly plan up front. The project scope is blown, time lines become
blurred and meaningless, and the developers are required to work
unreasonable hours, rushing to take the stench and dissatisfaction
off of the ones who made the bad decisions to start with.

You basically get left with the task of designing and redesigning, as
well as constantly rewriting code, as you come to understand exactly
what you are working with and what the real requirements and
expectations were in the first place. The code gets messy and you are
left with spaghetti code that is refactored and refactored and refactored.

> Also, pardon my ignorance, but what exactly is the 'planning phase'?
> What does one plan for? How does one plan? I realize these questions
> are probably answered during the academic phase in Computer Science,
> but since I'm not a Comp.Sci. graduate, I'm curious as to how a
> person such as I, be able to plan projects.
>

Well, it always depends on the project you are working on as to how
much planning you need to do, but basically the following paradigm will
work.

Planning Phase:

Step One: Talk with the unit desiring to have the application written.
This could be an internal business customer or an external entity. Of
course, if you are the person writing an application for at-large
distribution, you are the customer. Write these requirements down, and
reiterate the requirements as you understand them. Question anything you
are not clear about, to make sure you understand exactly what the
customer expects the application to do and why. Generally I follow
a police investigative technique. What is the technique? Always ask the
following questions:

1. Who (who is it for, who is the intended audience?)
2. What (what are you writing, what does it do, what does it
need to integrate with, what tools/languages/databases,
OSes, App Servers, etc. do I need?)
3. Where (where will the application be deployed? Will it be
deployed as a server based application, Client/Server,
on Unix, Linux, Solaris, App Server, etc.)
4. When (When does it need to be in service, When does all of
the functionality need to be put into place, can we separate the
functionality into separate releases?)

5. How (How do I go about developing this app. What
languages/tools/databases do we need to develop for, how
do I integrate with other sources, how do I class this
application, how do I develop and break out
functionality, etc.)

If you ask these questions and are diligent about answering all 5, you
will generally come out with the information you need.

Once you get the answers above, you start the formal design process.
Usually the above answers come in the form of Use Cases, either a Use
Case diagram or some other similar Use Case methodology (like bullet
points or flow charts). With databases, this will usually consist of an
ERD (Entity Relationship Diagram).

Once you have a system of Use Cases in a formal document, share this
with the business users or client(s). Make sure that they understand
what your understanding is, and add or modify as they identify initial
requirements that need to change. If you are lucky, you will have a
Business Analyst or PM who will have already scoped this out for you, so
you may or may not be involved in the initial steps above. However,
even if I have a BA or PM who gathers initial requirements (usually in
a Business Requirements Document, or BRD), I will usually put this into
Use Cases for my team.

If the business or customers need the application by a specific time,
you may find that you have to trim the requirements and employ a
phased-rollout approach. By this I mean that you set application
priorities, defining the must-have functionality for an initial
release, with the understanding that added functionality will come in
subsequent releases.

Once these requirements are well documented and signed off on (very
important: if the customer/business does not sign off, you are inviting
and even encouraging scope creep), you start the process of modeling.
How involved the modeling process is will depend on the complexity of
the application itself.

Usually, I try to create an effective class model, modeling the base
classes first. For more complex parts of the application process, I may
use Sequence, Activity, or State diagrams. You do not necessarily have
to go the UML route, but with a good UML tool (like Togethersoft), you
can actually use your model to create the core code, which serves two
purposes:

(1) It is easier to find flaws when writing good models. By using UML
for this design process, you are not wasting effort as you would with
flow charts or even pseudo code (which may still be necessary for
complex parts), because the model can actually be used to create the
actual code itself.

(2) Your model becomes a self-documenting core code base. This is
extremely useful for integrating existing code into a new or
extended application.

Once these models are established, you can start handing them off to
your team. Now they have a good model and good requirements from which
to work. This modeling process needs to be performed not only for your
own application, but for all of the programs you will be working with
or integrating with as well.

For instance, if your application is integrating with an existing
database, you need a working ERD model. If you are integrating with
Web Services, you need good documentation explaining how these services
are supposed to work, how you call them, what the data types are, etc.
Sequence diagrams are great here, but if you do not have these, at the
least some documentation and the XSDs will come in handy.

Once you have undertaken this formal process, you can give a good
picture of how long and how many resources you will need to complete
the program. You should, during this process, flesh out any potential
problem areas as well. For instance, if the business rules say that I
must calculate taxes using a Web Service for each state and locale in
an Order Entry System, then I should be able to see potential problems
where locales overlap. Take Bristol TN and Bristol VA as an example:
the city actually resides in two states and has two locales. Taxes
could be a problem to calculate unless the locale is specifically
identified and some lookup routine identifies which Bristol we are
dealing with. This small detail might be overlooked where development
is far removed from these types of scenarios. You could potentially
code the application without taking this into account and only discover
at QA, UAT, or production that you have missed a major potential
problem. Proper design and discussion would probably have caught this
using the Who, What, Where, When, and How model of investigation.
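Such a lookup routine might be sketched as below; the rates and the (city, state) keying scheme are invented for illustration, not real tax data:

```python
# Hypothetical tax-locale table for the Bristol TN/VA case described
# above. The rates are made up; the point is that city name alone is
# ambiguous and the state must disambiguate the locale.
TAX_RATES = {
    ("Bristol", "TN"): 0.0975,
    ("Bristol", "VA"): 0.0530,
}

def tax_for(city, state, amount):
    """Return sales tax for an order, keyed by (city, state)."""
    try:
        rate = TAX_RATES[(city, state)]
    except KeyError:
        raise ValueError(f"unknown locale: {city}, {state}")
    return round(amount * rate, 2)

# The same city name yields different taxes depending on the state.
assert tax_for("Bristol", "TN", 100.00) != tax_for("Bristol", "VA", 100.00)
```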

> I have a project which has a function in the grand scheme
> of things at work. I have the tools to complete this project. I'm
> the only person who is developing the said project. What exactly
> do I need to plan aside for the typical "which objects should
> be developed to handle this"?
>
>

Use the investigative methodology (Who, What, When, Where, and How) and
you should be able to determine the planning process. If you are
writing a brand-new application that does not need to integrate with
any other app, you should still plan, but the planning is not as
critical once you get past the Use Case stage. However, if your
application is integrating with other applications and back-end
systems, you need to get with these groups and determine what type of
planning and documentation would be beneficial to you and to them.

Remember the application you write today will probably need to be
expanded and updated for years. You may not be the one doing the
maintenance years from now or you may indeed be the one who hasn't even
thought about this code for years.


Hope this helps!!


Ed
Aug 13, 2007, 12:50:50 AM
Paul Nichols [TeamB] wrote:
>
> You basically get left with the task of trying to design and redesign as
> well as constantly rewrite code as you understand exactly what you are
> working with and what the real requirements and expectations were in the
> first place. The code gets messy and you are left with spaghetti code
> that is refactored and refactored and refactored.
>

I can see how this issue may not be resolvable.

> Well it always depends upon the project you are working on as to how
> much planning you need to do, but basically the following paradigm will
> work.
>
> Planning Phase:
>
> Step One: Talk with the unit desiring to have the application written.

Had that talk once for a project. The unit desiring the application
apparently decided to change the format and/or requirements without
mentioning it to me. I didn't need to rewrite the code from scratch,
but I had to change it because the format changed.

Now, as furious as I am about it, the idiots requiring the program
are actually expecting *me* to run it. Basically I wasted *my* time
creating some sort of UI when I could've just slapped together a damn
script and run it.


> Hope this helps!!

Thanks Paul! Very much appreciated; it definitely gives me a starting
point for understanding how to do a project correctly. It should also
be part of a programming FAQ.

Edmund

Francois Malan
Aug 13, 2007, 3:31:12 AM
Paul Nichols [TeamB] wrote:

> John Herbster wrote:
> > I think that modern computer programmers would be better off if we
> > could do more good fact-based analysis and less "wondering".
> >
> Generally, this is what profilers are for.
>
> What I find wrong with development today, is not the tools we have,
> but the lack of IT shops to properly architect and scope an
> application out before development begins.
>


Totally agree. It angers me when, as a developer, I am asked to make
time projections and resource estimates without having been made part
of the initial analysis and design process.

To tie in with John, I think his point is valid. Often a developer will
estimate, say, 6 weeks for development. During testing it is discovered
that (maybe) the numerical values are off. Now the debugging starts.
Even though the code and formulas might be correct, the lack of exact
numerical types, for instance, will be the cause of the error, not the
code itself. This can lead to *lengthy* fruitless debugging sessions if
the developer does not know this. Having tools to do deeper analysis
can help shorten these fruitless debugging exercises.

--

John Herbster
Aug 12, 2007, 10:26:36 PM

"Ed" <e...@kdtc.net> wrote

> Is this all about ALM or profiling code?

Edmund,
I intended it to be about writing and maintaining
code that better fits the problem and which could
be made more reliable. But apparently there is
still a lot of ALM interest hanging around.
Rgds, JohnH

Anders Isaksson
Aug 13, 2007, 8:04:33 AM
John Herbster wrote:

> we have also been overlooking significant
> gaps in the tool capabilities (like the lack of
> (a) a good variable type for designating a particular
> time or doing arithmetic with time values, (b) fixed
> and floating point decimal number variables that
> could match our business calculations, and (c)
> assignments that will warn us of loss of precision).

All three are 'commercially hopeless'. As a tool producer, CodeGear
can't come now and say "we did it all wrong all those years, and BTW,
we have also helped YOU to do it all wrong..."

As for the floating point 'problem', I think you must start at the
education end: you *must* make people understand that computers don't
work with 'theoretical mathematical concepts' but with a limited number
of binary bits. The concept of limited precision must be taught, not
just mentioned once...
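That teaching point fits in two lines; a sketch in Python (the same limitation applies in any language using binary floating point):

```python
# Binary floating point cannot represent most decimal fractions exactly;
# this is the limited-precision fact that must be taught, not just mentioned.
print(0.1 + 0.2)            # prints 0.30000000000000004, not 0.3
assert 0.1 + 0.2 != 0.3

# A base-ten type (Python's decimal module here) behaves the way
# business arithmetic expects.
from decimal import Decimal
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```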

Both (a) and (b) can be partly solved with (community-created?)
libraries, but the solution really needs to go into the product to
'force' everyone to use the same library. Borland/CodeGear has
already shown us that they are happy to include community projects in
the product if they are good enough; just get the projects started,
finished, and verified, and they might well get into Delphi.

> This could lead to a lot less wondering, especially
> group wondering, more programmers that could solve
> their own problems,

Assuming there is documentation, clearly stating what the different
concepts are good for, where they should be used (and where they
shouldn't), etc.

Also assuming the average IDE user is able to read (and understand)
that documentation...

--
Anders Isaksson, Sweden
BlockCAD: http://web.telia.com/~u16122508/proglego.htm
Gallery: http://web.telia.com/~u16122508/gallery/index.htm

Craig Stuntz [TeamB]
Aug 13, 2007, 9:44:33 AM
Paul Nichols [TeamB] wrote:

> John Herbster wrote:
> > I think that modern computer programmers would be better off if we
> > could do more good fact-based analysis and less "wondering".
> >
> Generally, this is what profilers are for.

Not entirely. I suspect (knowing John) that John refers not just to
speed or coverage or leak checking and the like but to /correctness/ of
code. Profilers don't really do that. Unit testing and the like can
help, but for more thorough testing of this sort you want a prover or
something like QuickCheck, if your code will support it.
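For readers unfamiliar with QuickCheck: it checks a stated property against many generated inputs. A hand-rolled, minimal sketch of the idea (not the real Haskell library, which also shrinks counterexamples):

```python
import random

# Minimal property-based check: generate many random inputs and verify
# a property holds for all of them; return the first counterexample.
def check_property(prop, gen, trials=1000, seed=42):
    rng = random.Random(seed)
    for _ in range(trials):
        x = gen(rng)
        if not prop(x):
            return x          # a counterexample was found
    return None               # property held on every trial

# Generator: random integer lists of length 0..20.
random_list = lambda rng: [rng.randint(-100, 100)
                           for _ in range(rng.randint(0, 20))]

# Property: reversing a list twice gives back the original list.
assert check_property(
    lambda xs: list(reversed(list(reversed(xs)))) == xs,
    random_list) is None

# A deliberately false property yields a counterexample instead.
assert check_property(lambda xs: xs == sorted(xs), random_list) is not None
```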

--
Craig Stuntz [TeamB] · Vertex Systems Corp. · Columbus, OH
Delphi/InterBase Weblog : http://blogs.teamb.com/craigstuntz
Useful articles about InterBase development:
http://blogs.teamb.com/craigstuntz/category/21.aspx

Wayne Niddery [TeamB]
Aug 13, 2007, 10:57:11 AM
Ed wrote:
>>
>> Step One: Talk with the unit desiring to have the application
>> written.
>
> Had that talk once for a project. The unit desiring this application
> apparently decided to change the format and/or requirements without
> mentioning it to me.

That's where sign-off becomes important. Get the customer to agree in
writing to the use-case requirements you've prepared based on your talks
with them. That does *not* mean requirements won't change; indeed, it is
a rare project where they do not change or expand during development.
But by planning and documenting those plans, you have a basis for
handling changes: you can better identify the impact a change will have
on the current design and implementation, and how much time and effort
it will cost.

Some methods, many advocated as part of Extreme Programming for example,
properly recognize the almost inevitable requirements changes that
projects go through, and thus the need to handle those changes as a
normal part of the project cycle via an iterative development process.

But some use that as an excuse to declare any planning of more than a
week or two of work futile and to be avoided. This elevates a very
valuable programming practice, refactoring, into a constant necessity
in one's daily work: since the goal is to do the least coding that
meets the sparse requirements set out for that one- or two-week cycle,
a much larger part of the code base will need to be refactored many
times before the project is complete.

What I think many miss is that refactoring applies no less to
*requirements and programming specifications* than it does to actual
code. Just as finding a flaw in the design stage is much cheaper than
finding and fixing it in the development/QA phases, so refactoring at
the design and specification stage is cheaper than constantly
refactoring actual code, and it does *much* more to keep an entire code
base coherent and consistent, and thus more manageable and maintainable.

--
Wayne Niddery - Winwright, Inc (www.winwright.ca)
"In a tornado, even turkeys can fly." - unknown


Q Correll
Aug 13, 2007, 1:34:23 PM
Paul,

| Hope this helps!!

Nice dissertation.

--
Q

08/13/2007 10:33:59

XanaNews Version 1.17.5.7 [Q's salutation mod]

Q Correll
Aug 13, 2007, 1:43:20 PM
Ed,

| Now, as much as I'm furious about it, is that the idiots requiring
| the program is actually expecting me to run it. Basically I
| wasted my time creating some sort of UI when I could've
| just slapped a damn script and ran it.

Personally, I think you did the right thing. I've BT, DT, and made the
"do it right" UI decision even though I knew beforehand that I would be
"running the job." You now have an app with a UI that can be run by
others if you aren't around (illness, vacations, moving on to another
task or job, etc.). It's the professional thing to have done, even if
you could have "winged it" more easily.

Don't expect any appreciation, either. Jobs such as this are often
only appreciated by ourselves in the knowledge of having "done it
right."

--
Q

08/13/2007 10:35:20

Q Correll
Aug 13, 2007, 1:44:35 PM
Wayne,

| What I think many miss is that refactoring no less applies to
| *requirements and programming specifications* as it does to actual
| code. Just like finding a flaw in the design stage is much cheaper
| than finding and fixing it in the development/QA phases, so
| refactoring at the design and specification stage is cheaper than
| constantly refactoring actual code, and does much more to keep an
| entire code base coherent and consistent, and thus more manageable
| and maintainable.

On the mark!

--
Q

08/13/2007 10:44:12

Paul Nichols [TeamB]
Aug 13, 2007, 10:39:24 PM
Craig Stuntz [TeamB] wrote:
> Paul Nichols [TeamB] wrote:
>
>> John Herbster wrote:
>>> I think that modern computer programmers would be better off if we
>>> could do more good fact-based analysis and less "wondering".
>>>
>> Generally, this is what profilers are for.
>
> Not entirely. I suspect (knowing John) that John refers not just to
> speed or coverage or leak checking and the like but to /correctness/ of
> code. Profilers don't really do that. Unit testing and the like can
> help, but for more thorough testing of this sort you want a prover or
> something like QuickCheck, if your code will support it.
>
Well, that's true, if that is what he is referring to.

Personally, I believe this is where coding standards come into play,
but, like the planning stage, they are usually skipped these days.
Remember coding standards and code walk-throughs?

I will look at QuickCheck, however.

Thanks

John Herbster
Aug 14, 2007, 11:11:04 AM

"Paul Nichols [TeamB]" <pa...@none.com> wrote

>>>> ... computer programmers would be better off if we could
>>>> do more good fact-based analysis and less "wondering". ...

>>> Generally, this is what profilers are for.

>> Not entirely. I suspect (knowing John) that John refers
>> not just to speed or coverage or leak checking and the
>> like, but to /correctness/ of code. Profilers don't
>> really do that. Unit testing and the like can help, but
>> for more thorough testing of this sort you want a
>> prover or something like QuickCheck, if your code will
>> support it.

Features like madExcept and ExceptionMagic should be
built-ins, not add-ins. A stack overflow, such as can be
caused by recursion errors, should not cause the program
to just disappear. The compiler should check for and
issue recursion warnings, and maybe even calculate the
maximum stack size required.

We need tools (or rules that can be followed) for making
a program that is guaranteed not to have memory
fragmentation problems.

But more than that, we need language features that
provide analogs of the variables of the world that we
are trying to program for. Here are some examples:

We need time variables that allow us to exactly represent
common civil times (including time zone) and allow
exact increments of time.

We need variable types that can represent decimal
fractions, integrated into our programming tools, which
would allow us to match our business data rules,
especially for rounding.

We need compiler warnings when precision can be lost
during assignment statements.
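The decimal and precision-loss points can be illustrated with Python's standard decimal module; this is a sketch of the behaviour being asked for, not a Delphi feature:

```python
from decimal import Decimal, ROUND_HALF_UP, Inexact, localcontext

# A stated business rule: round half up to whole cents.
price = Decimal("2.675")
assert price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP) == Decimal("2.68")

# The same value in binary floating point rounds down instead, because
# the double nearest to 2.675 is slightly below it.
assert round(2.675, 2) == 2.67

# Decimal contexts can trap inexact results, which is the spirit of the
# precision-loss warning being asked for at the assignment level.
with localcontext() as ctx:
    ctx.traps[Inexact] = True
    try:
        Decimal(1) / Decimal(3)   # 1/3 cannot be exact, so this raises
        lost_precision = False
    except Inexact:
        lost_precision = True
assert lost_precision
```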

> Well that's true, if that is what he is referring to.
> Personally, I believe, this is where coding standards
> come into play, but like the planning stage is usually
> skipped these days. Remember coding standards and code
> walk-throughs?

They were casualties of RAD.

Regards, JohnH

Q Correll
Aug 14, 2007, 1:10:00 PM
John,

| ... and maybe even calculate the max stack size required.

How does one calculate infinity? ;-)

--
Q

08/14/2007 10:09:25

Craig Stuntz [TeamB]
Aug 14, 2007, 1:36:28 PM
Q Correll wrote:

> > ... and maybe even calculate the max stack size required.
>
> How does one calculate infinity? ;-)

Unnecessary. The max stack size available to the app is known at
compile time.

The harder part is state. Unless your code is purely functional, you
can't really know the max stack size.

--
Craig Stuntz [TeamB] · Vertex Systems Corp. · Columbus, OH
Delphi/InterBase Weblog : http://blogs.teamb.com/craigstuntz

All the great TeamB service you've come to expect plus (New!)
Irish Tin Whistle tips: http://learningtowhistle.blogspot.com

John Herbster

Aug 14, 2007, 4:16:48 PM

"Craig Stuntz [TeamB]"

> > > ... maybe even calculate the max stack size required.


> > How does one calculate infinity? ;-)

> The harder part is state. Unless your code is purely
> functional, you can't really know the max stack size.

I am not sure what "purely functional" means.

I suspect that it may be possible to calculate the stack
required by building a tree with required sizes, starting
at the leaves that don't call anything and working back to
the events that can call them.

--JohnH

Craig Stuntz [TeamB]

Aug 14, 2007, 4:25:29 PM
John Herbster wrote:

> I am not sure what "purely functional" means.

Entirely without state; the output of each function is completely
determined by the values of the arguments.
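[Editor's note: a minimal illustration of the distinction, in Python for brevity; the functions are made up.]

```python
# Pure: the result depends only on the arguments.
def area(w, h):
    return w * h

# Not pure: the result also depends on hidden state.
counter = 0
def next_id():
    global counter
    counter += 1
    return counter

assert area(3, 4) == area(3, 4) == 12  # same inputs, always the same output
assert next_id() != next_id()          # same call, different results
```

This is why state makes static stack analysis hard: what a stateful routine does next (including whether it recurses) can depend on values no compiler can see.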

Q Correll

Aug 14, 2007, 5:08:33 PM
Craig,

| | How does one calculate infinity? ;-)
|
| Unnecessary. The max stack size available to the app is known at
| compile time.

I know. I was just being a wise*ss. See the "wink?" <g>

--
Q

08/14/2007 14:07:14

John Herbster

Aug 14, 2007, 7:44:35 PM

"Q Correll" <qcor...@pacNObell.net> wrote

> I know. I was just being a wise*ss. See the "wink?" <g>

Q, I was not sure. Further, I suspect, but am
not sure, that the problem has a practical solution.

What are your comments on my other statements,
copied below?

>> But more than that, we need the features in the language
>> to include analogs of the variables of the world that we
>> are trying to program for. Here are some examples:

>> We need time variables that allow us to exactly represent
>> common civil times (including time-zone) and allow
>> increments of time which are exact.

>> We need variable types that can represent decimal fraction
>> numbers integrated into our programming tools, which would
>> allow us to match our business data rules, especially for
>> roundings.

>> We need compiler warnings when precision can be lost
>> during assignment statements.

Regards, JohnH

Q Correll

Aug 14, 2007, 9:32:43 PM
John,

| What are your comments on my other statements,
| copied below?
|
| | | But more than that, we need the features in the language
| | | to include analogs of the variables of the world that we
| | | are trying to program for. Here are some examples:
|
| | | We need time variables that allow us to exactly represent
| | | common civil times (including time-zone) and allow
| | | increments of time which are exact.

Hmmm,... I've always written my own time functions when I need
something like that. I guess it never even occurred to me that it
should be in the language-compiler package. <g>

| | | We need variable types that can represent decimal fraction
| | | numbers integrated into our programming tools, which would
| | | allow us to match our business data rules, especially for
| | | roundings.

Yes! I've mentally "grumped" about that for many years. Back in the
very early Turbo Pascal days I even wrote my own decimal arithmetic
functions.

| | | We need compiler warnings when precision can be lost
| | | during assignment statements.

Cool tool. (Even though I would probably eventually turn it off. <g>
I've always been [painfully] aware of the limitations of binary and
fixed-length arithmetic.)


--
Q

08/14/2007 18:25:22

Craig Stuntz [TeamB]

Aug 15, 2007, 8:59:17 AM
One other bit on this. I haven't looked recently, but I don't think
Delphi does tail recursion optimization. In all seriousness, that would
certainly help with "infinite" recursion.

IOW, you could write a message handling loop like this:

procedure HandleMessage;
begin
  case Message of
    message1: DoMessage1;
    // ..
  end;
  HandleMessage;
end;

This would be optimized into an infinite while loop.
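[Editor's note: the transformation is easy to see outside Delphi too. CPython, like the Delphi of this era, does not perform tail call optimization, so the two forms below show by hand what the compiler would do automatically; the function is a toy example.]

```python
import sys

# Tail-recursive: the recursive call is the very last action.
def drain(n):
    if n == 0:
        return "done"
    return drain(n - 1)  # tail position: nothing left to do after the call

# The optimized form: reuse the current frame, i.e. loop.
def drain_loop(n):
    while n != 0:
        n -= 1
    return "done"

assert drain(100) == drain_loop(100) == "done"
# The looping form has no depth limit; the recursive one exhausts the stack.
assert drain_loop(10 * sys.getrecursionlimit()) == "done"
```

Because the call is in tail position, no caller state needs to survive it, so the "recursion" collapses into iteration with constant stack.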

--
Craig Stuntz [TeamB] · Vertex Systems Corp. · Columbus, OH
Delphi/InterBase Weblog : http://blogs.teamb.com/craigstuntz

How to ask questions the smart way:
http://www.catb.org/~esr/faqs/smart-questions.html

Q Correll

Aug 15, 2007, 12:40:18 PM
Craig,

| This would be optimized into an infinite while loop.

My head hurts. <g>

--
Q

08/15/2007 09:40:04

Craig Stuntz [TeamB]

Aug 15, 2007, 12:48:09 PM
Q Correll wrote:

> > This would be optimized into an infinite while loop.
>
> My head hurts. <g>

It's not that hard. Since (1) the procedure doesn't require anything
from the caller's stack* and (2) there is no code after the call to the
procedure, you can pop the stack before making the call. No stack means
it's not really recursing anymore.

-Craig

* There may be an implicit Self. But that's compiler magic anyway.

--
Craig Stuntz [TeamB] · Vertex Systems Corp. · Columbus, OH
Delphi/InterBase Weblog : http://blogs.teamb.com/craigstuntz

Q Correll

Aug 15, 2007, 2:51:20 PM
Craig,

| ...and (2) there is no code after the call to the
| procedure, you can pop the stack before making the call. No stack
| means it's not really recursing anymore.

Ah. That works.

My head is much better now. Thanks! <g>

--
Q

08/15/2007 11:50:26
