Are you guys trying to find a solution to your issue
by first digging up as many "design patterns" as you
can find and then trying to stick as many of them into your code
as you can manage?
Is THAT the central idea of modern programming techniques?
--
Programmer's Goldmine collections:
Tens of thousands of code examples and expert discussions on
C++, MFC, VC, ATL, STL, templates, Java, Python, Javascript,
organized by major topics of language, tools, methods, techniques.
You sound like a troll.
Certain design patterns are useful tools for solving certain
programming problems. I use them as I see fit.
And THAT is how you START?
>Certain design patterns are useful tools for solving certain
>programming problems.
:--}
> I use them as I see fit.
:--}
In order to make an educated choice, in both cases you need to know a
number of alternatives, i.e. the strong points and drawbacks of design
patterns for the task at hand, just as you need this for algorithms.
I did not check the web site you gave, but I assume it's a kind of
collection or catalog of design patterns. Of course it is not a good
idea to stick design patterns into an application
indiscriminately, but a catalog is a good knowledge base for not
having to reinvent the wheel over and over again.
best,
MiB
> Are you guys trying to find a solution to your issue by first
> looking if you can find as many "design patterns" as you can
> find and then try to stick as many of them into your code, as
> you can manage?
Maybe some are, but I've not encountered any. I have
encountered a lot of programmers who prefer reinventing known
solutions rather than using existing ones. In the end, using
design patterns is just using a common language for talking
about existing solutions, so you don't have to reinvent known
solutions each time around. (The common vocabulary is extremely
useful for documentation purposes.)
> Is THAT the central idea about modern programming techniques?
The central idea about modern programming techniques is to
produce error free, maintainable software as cheaply as
possible. Using known solutions, when applicable, is an
effective technique for that.
--
James Kanze
Absolutely. In fact, for my undergrad thesis I'm working on a
"problem description language" that will take your problem/goal
and re-express it in design patterns. It will then write the
program by mechanically applying the design patterns.
But I'm having a small problem finishing it. Anyone know of a
pattern for the above problem?
--Jonathan
Greets
Interesting implications there. Almost brings a religious context to
the whole discussion. (That is, are humans "simple" chemical machines,
or do we possess a "soul"?) Suffice it to say, you are greatly
simplifying the issues involved and jumping the gun.
This is already so off-topic, but I suggest reading some good books on
evolution by natural selection. Dawkins' The Greatest Show On Earth
does a remarkably good job describing evolution by natural selection,
specifically how evolution by natural selection may be the only known
natural process which creates information in a local open system, the
only process which creates information that is not intelligent
design. "The non-random survival of randomly varying replicators."
Throw in a couple of good books on information theory and entropy for
good measure.
On the flip side, who ever proved that humans are "creative"? Or any
more so than a really good computer AI (which has not yet been made)?
I think that this does not have anything to do with a soul.
The fact is that an algorithm cannot think, and that's it.
Human consciousness and intelligence do not work
on an algorithm. Plain fact. We can invent algorithms,
but an algorithm itself can't produce a previously
unknown algorithm. But the human brain can.
This is a mathematical fact....
>
> This is already so off-topic, but I suggest reading some good books on
> evolution by natural selection.
> Dawkin's The Greatest Show On Earth
What does this topic have to do with evolution?
Greets
Well, I am not against design patterns in principle.
But what I DO see all over the place is a literal obsession.
That web page used two design patterns for a single thing.
I do not argue whether it IS the way to go or not.
But that looked like extremism to me, just from glancing at it.
>best,
> MiB
Thank God!
I started feeling I am in a dreamland.
:--}
>--
>James Kanze
>Absolutely.
Well, that was what I was afraid of, to tell you the truth.
> In fact, for my undergrad thesis I'm working on a
>"problem description language" that will take your problem/goal
>and re-express it in design patterns. It will then write the
>program by mechanically applying the design patterns.
>
>But I'm having a small problem finishing it. Anyone know of a
>pattern for the above problem?
:--}
Well, unfortunately I do not have a reference to the article
by that French professor, and I am not sure it is going to be
as encouraging for your trip as you might expect otherwise.
>--Jonathan
I like this one:
"While GPS solved simple problems such as the Towers of Hanoi that could be
sufficiently formalized, it could not solve any real-world problems because
search was easily lost in the combinatorial explosion of intermediate states."
And that is EXACTLY what that French professor said
about 20 years ago if I recall correctly.
:--}
"combinatorial EXPLOSION".
Nothing less.
Yep.
>It is based on the proof that an algorithm for proofs can't possibly
>exist, too...
>http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems
>That's why the Blue Brain project is bound to fail.
>Because anything which is based on an algorithm cannot
>be creative...
Correct.
>Greets
How about this one:
"There are no closed systems. So the issue of entropy does not apply".
>On the flip side, who ever proved that humans are "creative"? Or any
>moreso than a really good computer AI (which has not yet been made)?
AI is just a myth.
How can you possibly create an ARTIFICIAL intelligence
if you don't even know how natural, that is biological,
intelligence "works"?
AI is simply trying to copycat that which already exists
in the biological world.
It's OK; I have a design pattern for changing my expectations.
--Jonathan
Is it? Could you cite a published source which claims this? I
disagree with most of what you said. I do not agree that "algorithms
cannot think", nor that "human consciousness and intelligence do not
work on an algorithm". Please go educate yourself some more, possibly
by reading up on the Turing Test and related thought experiments.
Also, how do you define "intelligence"? Something like the Turing
Test? What about the common thought experiment of the proverbial guy
in a big room running a very long algorithm, looking up through
millions and millions of pages of responses. Is "the room"
intelligent? Is the paper intelligent? Is there a difference between
the system and the constituent parts?
All of this is far from accepted fact, your stance or mine.
> > This is already so off-topic, but I suggest reading some good books on
> > evolution by natural selection.
> > Dawkin's The Greatest Show On Earth
>
> What does this topic have to do with evolution?
Our brain is a simple "algorithm", using the loosest definition of
algorithm. It may not be deterministic, but there's no "magic" which
makes it something other than a chemical machine. (At least, that's my
world view.) (Most people call that magic a "soul".) Evolution
explains how such a complex, interesting, and powerful algorithm came
to be.
http://www.fact-index.com/m/ma/mathematical_logic.html
http://www.fact-index.com/s/se/second_order_logic.html
> I disagree with most of what you said. I do not agree that "algorithms
> cannot think", nor "human consciousness and intelligence does not work
> on an algorithm". Please go educate yourself some more, possibly
> reading up on the Turing Test, and related thought experiments.
>
> Also, how do you define "intelligence"? Something like the Turing
> Test?
Intelligence is the capability to find an algorithm to solve some
problem. Therefore, if intelligence were an algorithm, it would be an
algorithm that produces algorithms to solve particular problems.
So the result would be an algorithm. But since there is no algorithm
to prove the validity of second-order logic formulas, the solution
is not based on an algorithm, but rather on intuition.
> What about the common thought experiment of the proverbial guy
> in a big room running a very long algorithm, looking up through
> millions and millions of pages of responses. Is "the room"
> intelligent? Is the paper intelligent? Is there a difference between
> the system and the constituent parts?
>
> All of this is far from accepted fact, your stance or mine.
>
>>> This is already so off-topic, but I suggest reading some good books on
>>> evolution by natural selection.
>>> Dawkin's The Greatest Show On Earth
>> What does this topic have to do with evolution?
>
> Our brain is a simple "algorithm", using the loosest definition of
> algorithm. It may not be deterministic, but there's no "magic" which
> makes it something other than a chemical machine. (At least, that's my
> world view.) (Where most people call that magic a "soul".) Evolution
> explains how such a complex, interesting, and powerful algorithm came
> to be.
I think that atheists are deluded by algorithmic machines into believing
that the brain is such a machine. From that point of view atheism is
just another form of religion, one which leads science in the wrong
direction.
Greets
Sorry, I'd like to stay away from this, but cannot.
Intelligence is NOT, and never EVER will be,
"a capability to find an algorithm to solve some problem".
That is an insult to Intelligence of the HIGHEST order.
That is ALL I am interested in saying, or even seeing,
in THIS grade of crap.
Enough.
--
Enough. ;)
What an impressive argument!
Greets
Technically, that was my point too. Thus far, we haven't made any
reasoned arguments. tanix, Branimir Maksimovic, and I have just
been spitting out definitions, or axioms, without any sort of coherent
argument. We've just made the claim that "Intelligence is defined as
X, and I am right". However, at least I mentioned that I was doing
that.
This is going to quickly devolve into an argument over "correct
definitions" or an argument of religion, neither of which I'm keen to
discuss, so I will leave it at that. (I do wish that instead we could
try to distill down the core of what most people mean when they say
"intelligence", but I fear that such a discussion is also off topic,
and none of the people in the discussion are well enough educated to
have such a discussion, myself included.)
It is not uncommon practice to mix several design patterns.
Note, using a design pattern does not imply you are talking about
every aspect of an application.
For example, assume you need to do the architecture for an application
that shall have a GUI and needs to access a central database in a
distributed system. You may end up with a design that separates the
internal representation of data from how it is represented in the GUI,
plus some (again separate) mechanism to react to user input from the
GUI and propagate business logic activity to state changes in the GUI.
If you talk to some other expert about this approach it may be easier
to just say you are planning for a Model-View-Controller design
pattern, and usually he will understand what you mean without the need
to explain the details, many of which may not yet be fixed anyway.
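That separation can be sketched in a few lines of Java (a toy example;
CounterModel and friends are invented names, not from any real
framework):

```java
import java.util.ArrayList;
import java.util.List;

// Model: holds the internal representation of the data and notifies
// interested parties when it changes.
class CounterModel {
    private int value;
    private final List<Runnable> listeners = new ArrayList<>();

    void addListener(Runnable l) { listeners.add(l); }

    void increment() {
        value++;
        listeners.forEach(Runnable::run); // propagate state change to views
    }

    int getValue() { return value; }
}

// View: renders the model; knows nothing about business logic.
class CounterView {
    String render(CounterModel model) {
        return "Count: " + model.getValue();
    }
}

// Controller: reacts to user input and drives the model.
class CounterController {
    private final CounterModel model;
    CounterController(CounterModel model) { this.model = model; }
    void onButtonClicked() { model.increment(); }
}
```

The point is only the separation of concerns: the view and controller
can each be replaced without touching the model.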
For the access to the central database you may find it a good idea not
to access the database directly, but to layer an application server in
between, and already you are using a second pattern, "Three-Tier".
The connection to the application server may be encapsulated in its
own class, and for efficiency you want to make sure it is a) only
established if actually needed, b) shared by different parts of
your application, and c) created at most once; you'll
probably end up with the "Singleton" pattern.
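A minimal Java sketch of that arrangement might look like this
(DatabaseConnection is an invented name; real code would also have to
handle connection failures and shutdown):

```java
// Lazily created, shared, at-most-one instance: the "Singleton" pattern.
class DatabaseConnection {
    private static DatabaseConnection instance; // at most one object (c)

    private DatabaseConnection() {
        // (a) only established if actually needed: nothing happens until
        // the first getInstance() call reaches this constructor.
    }

    // (b) shared by all parts of the application through this accessor.
    static synchronized DatabaseConnection getInstance() {
        if (instance == null) {
            instance = new DatabaseConnection(); // created at most once
        }
        return instance;
    }
}
```

The `synchronized` accessor keeps lazy initialization safe when
several threads race to the first call.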
At no time are you limited to using an existing design pattern; if you
find something unique that is better suited for the problem at hand,
use it by all means. However, if it proves useful, you should make a
note on why you chose this special design and how the parts fit
together. Maybe it can be reused for a different problem later and
become a design pattern for you.
I would like to recommend two books; there are a number of other
good books on the topic available, but these are the ones I liked most:
E. Gamma, R. Helm, R. Johnson, J. Vlissides: "Design Patterns -
Elements of Reusable Object-Oriented Software", Addison-Wesley: 1995.
M. Fowler: "Patterns of Enterprise Application Architecture", Addison-
Wesley: 2003.
Especially the Fowler book changed my view on software development; I
rarely encountered similar eye-openers before. Stroustrup's "The C++
Programming Language" (not the language description part, but the
later chapters about programming paradigms) was one; "Gödel, Escher,
Bach" by D. Hofstadter was another one for me.
best,
MiB
This is not a fact, but an open question in artificial intelligence
research.
> Human consciousness and intelligence do not work
> on an algorithm.
Assertion without evidence.
Quantum physicists believe in a finite state universe. If consciousness
is embedded in a finite-state universe then it means it's part of a
finite state machine, ergo ...
But that is not so interesting; what's more provoking is the possibility
that consciousness could be encoded in a lot fewer states.
> Plain fact. We can invent algorithms,
Not all of us, just a small minority.
> but an algorithm itself can't produce a previously
> unknown algorithm.
Obviously, an algorithm whose purpose isn't algorithm invention doesn't
invent algorithms.
Genetic programming is a concrete example of algorithms inventing
algorithms.
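For what it's worth, even a toy mutate-and-select loop shows the
flavor of the idea: the winning instruction sequence below is found by
random mutation plus selection, not written anywhere in the source.
(TinyGp and its two primitive ops are invented for illustration; real
genetic programming evolves tree-structured programs with crossover.)

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.function.IntUnaryOperator;

class TinyGp {
    // Primitive operations a "program" can be built from.
    static final IntUnaryOperator[] OPS = {
        x -> x + 1,   // op 0: increment
        x -> x * 2,   // op 1: double
    };

    // Interpret a program (a list of op indices) on an input.
    static int run(List<Integer> program, int input) {
        int v = input;
        for (int op : program) v = OPS[op].applyAsInt(v);
        return v;
    }

    // Fitness: total error against the target function 2x + 3.
    static int error(List<Integer> program) {
        int e = 0;
        for (int x = 0; x <= 5; x++) e += Math.abs(run(program, x) - (2 * x + 3));
        return e;
    }

    // Mutate-and-select until a zero-error program is discovered.
    static List<Integer> evolve(long seed) {
        Random rnd = new Random(seed);
        List<Integer> best = new ArrayList<>(List.of(0, 0, 0));
        while (error(best) > 0) {
            List<Integer> mutant = new ArrayList<>(best);
            mutant.set(rnd.nextInt(mutant.size()), rnd.nextInt(OPS.length)); // point mutation
            if (error(mutant) <= error(best)) best = mutant; // selection
        }
        return best;
    }
}
```

Whether a discovered sequence counts as "not encoded in the original
algorithm" is, of course, exactly the point under dispute here.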
> This is mathematical fact....
ROFL.
>> Human consciousness and intelligence do not work
>> on an algorithm.
>
> Assertion without evidence.
The evidence is that in mathematics there is no algorithm to prove
that a logic formula is valid.
>
> Quantum physicists believe in a finite state universe. If consciousness
> is embedded in a finite-state universe then it means it's part of a
> finite state machine, ergo ...
This is also an assertion without proof. The fact is that the set of
all valid second-order logic formulas is not even recursively
enumerable.
>
> But that is not so interesting; what's more provoking is the possibility
> that consciousness could be encoded in a lot fewer states.
No one knows what consciousness is yet...
>
>> Plain fact. We can invent algorithm,
>
> Not all of us, just a small minority.
Everybody invents algorithms; they do not
have to be algorithms for computers...
For example, a simple algorithm to shop for
something...
>
>> but algorithm itself can;t produce previously
>> unknown algorithm.
>
> Obviously, an algorithm whose purpose isn't algorithm invention doesn't
> invent algorithms.
>
> Genetic programming is a concrete example of algorithms inventing
> algorithms.
Genetic programming? Could you provide an example of an algorithm
creating an algorithm that is in no way encoded in the original
algorithm?
>
>> This is mathematical fact....
>
> ROFL.
?
Greets
Sorry to interfere here, but I can tell you that those
that experience it "know" what it is.
Not sure if you have heard of such a thing as awareness.
Sure, as far as science goes, they do not know what consciousness is,
and it is not even in the cards. It is like knowing God or Truth,
which is simply WAY out of the scope of this domain.
Yes, you CAN have some taste of it.
But to know it, you have to be it, nothing less.
Hope you don't mind THAT much.
The Java folks had a great idea with JavaBeans, but they failed at
implementing it in a consistent and useful way. There is no language
level support for property-change listeners, so it becomes extremely
burdensome trying to create a functional bean which is useful as a GUI
model.
Don't get me wrong, I'm primarily a Java programmer. I just think that
the Bean hype has caused many people to forget basic abstraction
concepts, and the benefits of Beans have yet to be fully realized.
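For reference, the closest the standard library gets is
java.beans.PropertyChangeSupport; the per-bean boilerplate it requires
(sketched below with an invented PersonBean) is exactly the burden in
question:

```java
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

// A hand-written bound property. All of this wiring has to be repeated
// for every bean, because there is no language-level support for it.
class PersonBean {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private String name;

    public void addPropertyChangeListener(PropertyChangeListener l) {
        pcs.addPropertyChangeListener(l);
    }

    public void setName(String name) {
        String old = this.name;
        this.name = name;
        pcs.firePropertyChange("name", old, name); // notify GUI models etc.
    }

    public String getName() { return name; }
}
```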
--
Daniel Pitts' Tech Blog: <http://virtualinfinity.net/wordpress/>
Well, I think Sun stretched itself too much creating ALL sorts of
gadgets, "toolkits", "subsystems" and you name it.
As a result, they spread themselves too thin, worked in too many
different directions and finally, boom, the biggest tragedy in the
software business: a war with Microsoft, which simply killed Java.
As of this moment, the traffic on MFC sites is twice as much as on
C++ sites and at least 5 times as much as on Java sites.
And Java contributed significantly by simplifying the language,
getting away from all these pointers, "by value" semantics, references
and all sorts of other complications in the language that end up
creating nothing but nightmares.
People often forget that developers have ALL sorts of things on their
minds. They don't need to remember another universe of things when
incorporating some functionality from some toolkit or some other
voluminous language complication.
Just switching from the IDE world view, with its thousands of things
to remember, to the debugger world view, with its piles of things,
to the language world view, and on and on and on, down to switching
your mind to your email program's world view, then your editor's
world view, which also has thousands of things to keep in mind
and its own concepts, symbols, keystroke sequences, codes,
maps, tables, languages, fonts, colors and on and on and on.
So, what happens is that you have to remember millions of
different things, hundreds of ways to switch your mind into a totally
different universe and its perspective.
So, when someone develops something, they think someone has
either the time or the interest to study another bible-sized book
and INSTANTLY be able to switch to another bible-sized world view.
That is what I mean when I say:
"We totally do not understand what information is".
Just a stone age view.
Meanwhile, the load on the mind is simply immense.
Just look at things like Java generics or C++ templates,
or all sorts of .NET things.
Does anybody think that someone is going to sit there for half
an hour and stare at all those hieroglyphs when they look at
some method or class? How long is it going to take you to
develop anything worth even mentioning if you have to switch
your mind at this rate?
And now we have these design patterns with ALL sorts of side
effects. People look at design patterns as some kind of
revolutionary progress and try to stick them onto anything
they can.
Just as I described before, I worked on JSpider.
Very nice conceptually. It used the Visitor pattern and the
whole program was totally async. When you try to spider the
web using this thing, the number of things you have to do is
quite something to consider. There are ALL sorts of complications,
structures, objects, trees and you name it to be dynamically
constructed, updated, etc.
Try to debug this thing?
Well, I spent at least a week working with that code and trying
to extend it so it has much more powerful functionality and
much more flexible everything.
You cannot even DEBUG that thing in async mode, because you
may get ANY async event coming from ANY of the threads,
simultaneously accessing either some page or an entire site.
As a result, you cannot single step through this thing
and actually relate anything to anything else.
It was the worst nightmare I recall.
No wonder the guy who originally wrote JSpider just gave
up and has not maintained or developed it for at least the
last 5 years, despite the fact that it is the most powerful
spider I have seen, at least in the Java world, which is the ONLY
thing I am interested in.
Just being able to run my main app on Linux after my Windows
box was rooted and was unusable for a month beats all the
arguments I heard against Java hands down.
There was not even a need to recompile it.
Sure, the GUI did look quite different and less pleasing than
on Windows, but the functionality was perfectly there.
And I see ALL sorts of design patterns that create such a
nightmare from a debugging standpoint, or even from the standpoint
of being able to quickly analyze your code within a couple
of seconds in order to implement something or fix something,
that the overall net benefit is zero, if not negative.
But yes, it DOES look on paper as something "revolutionary",
something really "impressive".
As a mind trip that is.
Well, if you could put that design pattern in some comfortable box
so it sits there, and once you have gone through the pain of
implementing it you don't have to worry about it, then it would be a
different story.
But plenty of design patterns end up creating async code, and
async code is such a pain in the neck to debug in a more or less
complex program that I bet you can't find more than a handful of
patterns that actually help things and fit like a hand in a glove at
the end.
> >> >> But I'm having a small problem finishing it. Anyone know of a
> >> >> pattern for the above problem?
>
> >> > Sure, just implement it as a GPS.
>
> >> >http://en.wikipedia.org/wiki/General_Problem_Solver
>
> >> Hm, back in 1987 my math teacher showed us a mathematical proof that
> >> an algorithm for creating algorithms can't possibly exist.
I'm dubious
> >> It is based on the proof that an algorithm for proofs can't possibly
> >> exist, too... http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems
one of the more abused results in mathematics. Comparable with
"Einstein showed everything was relative" and "modern physics is only
just discovering what eastern philosophy has long known".
> >> That's why blue brain project is bound to fail.
> >> Because anything which is based on algorithm cannot
> >> be creative...
and the Church-Turing thesis would then argue that nothing else can be
either. Disproofs of AI seem to assume their conclusion.
> >Interesting implications there. Almost brings a religious context to
> >the whole discussion.
you think!
> (That is, are humans "simple" chemical machines,
> >or do we possess a "soul"?) Suffice to say, you are greatly
> >simplifying the issues involved and jumping the gun.
>
> >This is already so off-topic, but I suggest reading some good books on
> >evolution by natural selection. Dawkin's The Greatest Show On Earth
> >does a remarkably good job describing evolution by natural selection,
> >specifically how evolution by natural selection may be the only known
> >natural process which creates information in a local open system, the
> >only process which creates information which is not intelligent
> >design. "The non-random survival of randomly varying replicators."
>
> >Throw on a couple good books of information theory and entropy for
> >good measure.
>
> How bout this one:
>
> "There are no closed systems. So the issue of entropy does not apply".
>
> >On the flip side, who ever proved that humans are "creative"? Or any
> >moreso than a really good computer AI (which has not yet been made)?
>
> AI is just a myth.
> How can you possibly create an ARTIFICIAL intelligence
> if you don't even know how natural, that is biological,
> intelligence "works"?
surely the point is to find out? With your attitude we wouldn't have
steam engines or aeroplanes, let alone computers.
> AI is simply trying to copycat that which already exists
> in the biological world.
assuming that is what they're doing, you say that like it's a bad
thing
--
Nick Keighley
>>>>>>>> But I'm having a small problem finishing it. Anyone know of a
>>>>>>>> pattern for the above problem?
>>>>>>> Sure, just implement it as a GPS.
>>>>>>> http://en.wikipedia.org/wiki/General_Problem_Solver
>>>>>> Hm, back in 1987 my math teacher showed us a mathematical proof that
>>>>>> an algorithm for creating algorithms can't possibly exist.
>>>>>> It is based on the proof that an algorithm for proofs can't possibly exist,
>>>>>> That's why the Blue Brain project is bound to fail.
>>http://www.fact-index.com/m/ma/mathematical_logic.html
>>http://www.fact-index.com/s/se/second_order_logic.html
>Enough.
So, everybody had a chance to talk their brains out on "design patterns".
And here is the scoop from my end of the wire:
Design patterns are not some recipes from God telling you "how it is"
in Reality.
They are ideas by some hopefully creative people who discovered
certain things, and those things are:
1) Optimization
Optimization is a broad issue.
You can optimize for performance.
You can optimize for code size.
You can optimize for the minimum number of methods,
thus assuring the maximum generality of the code.
2) System structure
System structure is a highly complex issue.
Basically, the MAJOR principle, the #1 criterion in a system,
is STABILITY.
If your program, which is a system, ever breaks,
then you may end up with a nuclear disaster,
just for the sake of argument.
Performance IS important, but only to the extent that it
does not affect the #1 criterion, stability.
A system needs to be structured to minimize the complexity
of interactions. The more components and subcomponents
the system has, the more complexity results, and complexity
of ALL sorts, such as the number of interactions and the path
length to interact (the more steps you have to perform,
the longer your path, and the higher the probability of increased
complexity).
This is basically a beauty domain.
The simplicity.
Well designed system has the minimal number of components
and the minimal amount of interactions.
3) Reusability
Reusability is another way to describe the generality or
universality of some component or subsystem.
It indirectly translates into portability as a side effect.
In a well designed system, some component or subsystem
could be used for multiple purposes by adding a thin
layer above it.
So, the fundamental premise behind "design patterns"
is to maximize the system's "correctness".
What is a "correct" system?
Well, it is a mathematical issue.
In formal terms, it requires a proof that your program logic
will perform as advertised under ANY conditions whatsoever.
To prove that a system is "correct" in terms of mathematics
is FAR from a trivial task.
It is interesting to note that beyond the idea of
a semaphore, which was proven to be formally correct
by Dijkstra, there exists no proof for more complex
systems to the best of my knowledge.
The complexity of problem is just immense.
Why are we leaning toward this train of thought?
Well, because when you evaluate the applicability of some
design "pattern", you have to watch really carefully what
it buys you.
There are ALL sorts of issues in a system design, and,
especially in these modern and hectic times,
you need to consider the information overload factor.
What is the information overload factor?
It is a very interesting and highly complex issue.
One aspect of it is the fact that we are subjected
to tremendous amounts of information jamming our brains
almost anywhere we look.
In the software business, most of the time, people designing
something do not even bother to consider how consistent their
system is, in terms of being conceptually similar to that
which is well known and established at the moment.
So, they keep hacking up some "new" ways of looking at things.
Take for example the user interface issue.
Have you noticed that most of the programs out there do not
present you with a consistent user interface? They simply
invent all sorts of buttons, functionality, tabs, etc. as
THEY see fit, without even considering the fact that the user
may have to deal with hundreds of different programs, each
having thousands of different parameters, notions, functions,
buttons and you name it.
Each program simply exists in its own artificially created
world, without even bothering to note that the user's mind cannot
possibly switch from one world view to another with a flip
of the finger. His mind has to literally switch to another
universe with thousands if not millions of ALL sorts of
parameters, ways of looking at essentially the same things, etc.
One very interesting thing was Microsoft's idea of
consistent GUI design, which they started advocating nearly a
generation ago.
The idea was essentially this:
Since the most general level of GUI in any program
is the menu bar, you need to design it in a consistent manner.
Each program, no matter what it does, should have the following
menu items:
1) File
2) Edit
3) Help
I do not remember exactly the details, but the idea still holds.
Basically, what it means is that your mind can easily switch
while using different programs because your basic menu categories
are logically similar, so the mind does not have to RADICALLY
switch from one world view to another, so different, that it
basically has nothing in common with the first one.
But what does it have to do with design patterns?
Well, it happens to apply to the idea of design patterns
just as well. How?
First of all, why do you need some "design pattern"?
Don't you have your own brain to structure the system
in the most optimal way?
What IS a design pattern?
Is it some kind of pill against stupidity?
Is it some kind of recipe for "enlightenment", or a way to heaven?
Nope.
Take for example the idea of an interface as such.
What is an interface?
Well, simple enough.
It is simply a way to communicate between systems that
minimizes the number of different ways of looking at different
things by providing the most common parameters.
Secondly, it shields you from knowing the internals
of the other system you are trying to communicate with.
So, if you perform some operation, there is a required
minimum of parameters to pass, regardless of what kind of
operation you are going to perform.
That is basically an optimization issue.
Beyond that, you can achieve the "code reuse" concept.
If the system is architected in a general enough way,
then each component of that system may perform the necessary
operations in multiple situations and produce the correct
result REGARDLESS of who is the client or consumer of that
operation. That is basically all there is to it.
A poorly designed system has a separate piece of code to
perform a similar operation in each slightly different
context. As a result, you are basically dealing with
"special cases" no matter where you look in your system.
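That point can be made concrete with a small Java sketch (Shape,
Circle, etc. are invented names for illustration): one general routine
serves every client, with no special case per concrete type:

```java
// The general contract: a minimal set of parameters, hiding internals.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

// One piece of code produces the correct result REGARDLESS of which
// concrete shape the client passes in -- no "special cases" anywhere.
class Measurer {
    static double totalArea(java.util.List<Shape> shapes) {
        double total = 0;
        for (Shape s : shapes) total += s.area();
        return total;
    }
}
```

Adding a new shape requires no change to Measurer at all; that is the
reuse the interface buys.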
Another aspect of "design patterns" is the one that
ties in directly with the ability to very easily comprehend
some operation or set of parameters and to be able to
understand a different piece of code. The information
overload issue.
In a well designed system, you should be able to look at
any piece of code and be able to understand what it does
within seconds. Nowadays, THAT is the time frame.
We can no longer afford switching our minds from one world
view to another, with its thousands or even millions of
different parameters and points of view.
The switch has to be smooth and easy.
That means things need to be consistent.
There is a limited logical number of things that may
happen in ANY system, no matter what it does.
So, when you look at a totally different part of your system,
you don't want to have to switch your mind from the Bible to
the Koran, Yoga, or Existentialism.
You'll simply go crazy one day if you do these things
hundreds if not thousands of times a day.
You have to have some solid ground to stand on.
No wonder Jesus said:
"A house that was built on sand is bound to fall".
Correct.
Add the debugging aspect.
Since it is not possible to write "correct" programs
from the very first attempt, no matter what some smart guy tells you,
you would have to debug your code.
So, when you write that code, you need to be aware of
two fundamentally different approaches.
1) Synchronous mode
2) Asynchronous mode
In synchronous mode everything is simple.
Because you can follow each step of what your program does.
It is all sequential. One thing logically follows the other.
In async mode, it is a totally different world.
You can hardly single-step it.
You'll come to some point where one component of your
system will perform a write or send operation and that's it.
You won't be able to see the result of that write in
a synchronous manner.
Some event will occur and will trigger some other method
to handle the input or a result of your write.
Considering the fact that modern programs are mostly
multithreaded, the task of debugging becomes immense
if not monumental.
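The sync/async contrast described above can be sketched as follows. This is an illustrative toy (the queue stands in for a real event loop; all names are invented): in the synchronous version the result is on the very next line, while in the asynchronous version the result surfaces later in a handler, so there is no single place to set a breakpoint and "see" the write complete.

```python
# Synchronous: the result is returned in-line; one thing follows the other.
def write_sync(data):
    return f"wrote {len(data)} bytes"

# Asynchronous: the call returns immediately; some later event triggers
# the registered handler with the result. 'pending' stands in for an
# event loop's queue in this toy model.
pending = []

def write_async(data, on_done):
    pending.append((on_done, f"wrote {len(data)} bytes"))

def run_events():
    # Drains the queue, firing each handler -- the "some event will occur
    # and trigger some other method" step described above.
    while pending:
        callback, result = pending.pop(0)
        callback(result)

sync_result = write_sync(b"hello")   # visible right here when single-stepping

async_results = []
write_async(b"hello", async_results.append)  # nothing visible yet...
run_events()                                 # ...result appears only now
```

Single-stepping past `write_async` shows nothing; the outcome only exists after the event dispatch, which is exactly why debugging the async variant is harder.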
But what does it have to do with design patterns?
Well, that means when you have some issue to resolve,
yes, you CAN recall or review some "design pattern",
helping you to solve it.
But be very careful to consider ALL sorts of issues,
such as:
1) Information overload
2) Ability to read and understand the code fast, within seconds.
3) Ability to debug this thing
4) Ability to log this thing so it could be easily fixed.
If your design pattern exhibits async behavior that is
not logically necessary, you may have a hell of a time
debugging or reviewing the log information.
5) What does that design pattern buy you in the end?
What issues does it solve?
What impact on system stability and performance does it have?
Get the drift?
:--}
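Points 3 and 4 of the checklist above can be made concrete with a minimal logging wrapper. This is a hypothetical sketch (all names invented): every call and its result are recorded, so an operation whose result arrives asynchronously can still be fixed from the log after the fact.

```python
# Minimal sketch for checklist points 3-4: wrap any operation so each
# call and its result are logged, making behavior reconstructible later.
log_lines = []

def logged(name, fn):
    def wrapper(*args):
        result = fn(*args)
        log_lines.append(f"{name}({', '.join(map(repr, args))}) -> {result!r}")
        return result
    return wrapper

# Usage: wrap a plain operation and call it as usual.
add = logged("add", lambda a, b: a + b)
add(2, 3)
```

The wrapper costs one extra call per operation, which is the kind of stability/performance trade-off point 5 asks you to weigh.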
I believe your valid argument goes something like this:
definition 1- Creativity is the ability to create an algorithm to
solve any solvable problem.
premise 2- Various correct proofs demonstrate that no algorithm is
"creative".
premise 3- Humans are creative.
--
conclusion 4- Therefore, human consciousness does not operate on an
algorithm.
Your argument is valid; that is, its conclusion follows logically from
its premises. However, it is not sound: some of your premises are
false, and thus we cannot conclude that the conclusion is true.
Definition 1. I might argue over the definition of creative, but let's
just go with your definition.
Premise 2 is true (under your definition of creative).
Premise 3 is not true (under your definition of creative). At least,
you have yet to convince me that it is true. Put another way, yes we
have proofs that a general problem solver is impossible, and that the
halting problem cannot be solved by a Turing machine. However, we
still have Maple, and other pseudo general (math) problem solvers. My
calculator can still solve most / all calculus equations I encountered
in high school, and my human brain can still solve most problems put
to it (given enough time). However, I do not believe that I could
solve every solvable problem, nor do I believe that I could determine
if some Turing machine would halt for every possible Turing machine
(ignoring time restraints).
"There are no problems to be solved.
There are only mysteries in life."
>premise 2- Various correct proofs demonstrate that no algorithm is
>"creative".
>premise 3- Humans are creative.
--
I was wondering how we sorted problems into "solvable" and
"unsolvable"...
> Premise 2 is true (under your definition of creative).
>
> Premise 3 is not true (under your definition of creative). At least,
> you have yet to convince me that it is true.
I was trying to remember where I first came across the Gödel argument
"disproving" AI (Weinburg?). It sounded BS then and it sounds BS now.
p1) machines must operate by a fixed algorithm
p1a) and hence are bound by Gödel's result.
p2) people do not have to operate by a fixed algorithm and hence are not
bound by Gödel's result.
conclusion: people can do things machines can't do
well, woopy doop. I didn't accept p1 and p2 originally. Now I'm not
convinced p1a is even applicable.
It's like winning a race by disqualifying the other contestants.
Hm, this is very OFF TOPIC, but p1 is false, and p1a is meaningless (it doesn't
follow even if p1 were true, it's a category error). p2 is meaningless.
Roger Penrose, the inventor of the above, is a genius (e.g. Penrose tiles, his
work with Hawking, etc.), but he is also utterly mad -- like (at least) 89%
of the US population, 10% of US scientists, and about 65% of Middle East
scientists. Blaise Pascal was, I think, another example of the kind. Just
different religious issues.
Cheers & hth.,
- Alf
:--}
--
Ok, fine. I'll speak, no matter how hopeless it is.
It is not HIM who is "mad".
It is YOU.
Why?
Well because YOU do not claim your own being,
and YOU do not allow the expression of it,
being forever afraid to go against the herd
because great fear arises in you.
The fear of being condemned by others,
just as you condemn him in this very post.
But you know what?
As "mad" as he is,
his life is a life of a diamond
compared to your utterly gray existence.
MAD?
WHO?
You MUST be mad to waste your life like this
and do not claim all the grandeur of it!
:--}
Enough?
Or you want more?
:--}
The mothership is FULLY loaded...
p1 is false: machines have to operate on an algorithm,
and since there is no algorithm for creativity...
it follows that
p2 :) people's creativity is not based on an algorithm...
>
> Roger Penrose, the inventor of the above, is a genius (e.g. Penrose
> tiles, his work with Hawkings, etc.), but he is also utterly mad --
No, Penrose just found what all teachers of mathematical logic
knew before him since the '30s...
Greets!
You may continue to claim by fiat that people are creative, and that
creativity is the ability to create an algorithm to solve any solvable
problem. I will continue to note that you are claiming this by fiat,
and I will continue to disagree that people can create a solution
algorithm for any solvable problem. At least, I will continue to doubt
it until presented with some argument which is not fiat.
Well, why not ask what is creativity to begin with?
By definition, it has to deal with something new.
Now, since it is new, there exists no information about it
in the system at the moment.
If there does not exist any information about it,
how can you possibly find a way to create something,
that you can not even describe in your present terms?
Algorithms refer to things that are known and describable.
If you throw some totally unknown thing at an algorithm,
how does it "know" where to plug it in, into what table
or what description of what? What category it belongs to?
Under what label do you stick it in? What does it relate to?
So, there is an inherent contradiction of adding something
that does not exist in the system yet. On one end, what you
have created is valid and new, and on the other hand, you
don't know where to plug it in.
Creativity is based on intuition and not a set of known "facts".
Intuition is the inherent ability of Intelligence to go beyond
all known "facts" and limitations associated with them.
It is largely contextual, even though revelations are not.
Intuition "works" when you are willing and courageous enough
to tune into things you don't know. It is a great trust.
Trust into validity of your own being.
Trust that you can go beyond your limitations
and it will be revealed to you eventually
when you are mature enough and ready enough to even allow
such a thing to enter into domain of that, which you already
know.
There is an argument in AI that Intelligence is something that
is created via randomness. You just flip some coin, and bang,
you created something new, sooner or later, which is utter
fallacy.
Intelligence is not based on some random thing. It is very
directed, very contextual, forever staying within the bounds
of existing information. Plus allowing totally new information
to enter, thus expanding the domain of the known.
Intuition is an opening within you that allows you to communicate
with domains of beyond.
Most of the authors of great discoveries claimed
"it was not me, who created this thing.
It was given to me."
Given to you by WHOM?
Algorithms, on the other hand, can not deal with totally new information.
It is basically an exception situation.
When exception happens, it is obvious, you can not continue
processing something because there exists no logic within
your system to handle it.
At THAT point, all you can do is try to either restart some
operation, hoping that some wild parameter is going to go away
like in situations where you lost a network connection for
totally unknown reason, or rather a reason not described by
your system, or abandon the whole operation and go to the next
item on the list if there is one.
The bottom line is intelligence is not algorithmic.
Never was and can not possibly be.
The very evidence of AI shows it in no uncertain terms.
Marvin Minsky very pointedly stated that all the great "progress"
and "achievements" of AI are so primitive that they basically
view intelligence in terms of ideas of backlash and hysteresis
level.
We simply have no idea what Intelligence is.
We simply have no clue what consciousness is,
without which, no intelligence is possible
pretty much by definition.
Everything that was "discovered" in AI field is nothing more
than imitation of the existing Intelligence, which is biological
in the physical domain at least, the domain of manifest and
embodied into matter.
There exists no algorithm or method to discover something new.
It is not a matter of random permutations of known things,
as of necessity.
We are profoundly limited by the very nature of physical domain,
forever groping in the darkness to see some opening, some light
"at the end of tunnel" and forever enjoying the tremendous
blessings of seeing beyond the ordinary, beyond known,
beyond the limitations of that which we know.
If it were algorithmic, we could just keep throwing the random
numbers at some algorithm and it would eventually find something
new, except it would not even know where to stick it, under
which category of what.
The idea of AI is fundamentally flawed.
There is no such a thing.
ALL we know of is biological intelligence
and that is the only reference framework available to us
in the physical domain.
>> Roger Penrose, the inventor of the above, is a genius (e.g. Penrose
>> tiles, his work with Hawkings, etc.), but he is also utterly mad --
>
>No, Penrose just found what all teachers of mathematical logic
>knew before him since the '30s...
:--}
Cool. I wish I knew what you guys are talking about.
>Greets!
Well, when you deal with issues of nothing less than Intuition,
good luck.
Just one point on this:
"The intuition is the pattern-matching process that quickly
suggests feasible courses of action."
Not true. There is no analysis in intuition.
It is an instantaneous REVELATION,
totally discontinuous as far as state of the system goes.
"The analysis is the mental simulation, a conscious and deliberate review of
the courses of action".
There is no analysis. It does not apply.
You can not possibly have a "course of action" towards something
you do not know yet. Action is forever directed. Directed towards
something you already know one way or the other. You have to have
some "goal". It has to exist somewhere.
Otherwise, where are you going towards.
What is your direction?
And the only direction I know of is direction of Truth,
THAT ... WHICH ... IS.
That is all you can do...
Forever trying to discover that, which already is,
even though not in the domain where you are in at the moment.
I'm not sure "mad" is quite the word to use, but for someone so smart to
screw up his argument in such an obvious way certainly suggests he has
Issues (typically this ends up being religious or some other existential
insecurity).
-Miles
--
o The existentialist, not having a pillow, goes everywhere with the book by
Sullivan, _I am going to spit on your graves_.