Thread A is my main working thread. This thread is waiting on 2 events:
1- Quit Event
2- Optional callback call Event
This thread calls a callback function on every WaitForMultipleObjects()
timeout, here 5000 ms.
Thread B is an optional thread that can be enabled/disabled at any time. This
thread waits only on a quit Event, and when WaitForSingleObject() times out
it sets the Optional Event of Thread A via SetEvent(). The timeout here
is 15,000 ms.
Each thread calls AfxEndThread(0, FALSE); at the end, and the controlling
function waits on A->m_hThread and/or B->m_hThread before deleting
their respective objects.
Now, if I do not enable thread B, I can start and end Thread A without
any issue. If I start both threads A and B, I can also quit them without
problem while they are both running. However, if I start both threads A and B,
stop thread B, wait about 10 seconds, and then try to stop thread A,
the WaitForSingleObject() on its handle deadlocks.
I have found out that it is related to the event I am using for telling
thread A to execute the optional callback. If I simply comment out the
SetEvent(), the problem never occurs.
Any idea why this is happening?
Thank you
"Keaven Pineau" <keavenpineau...@videotron.ca-no-more-spam> wrote
in message news:e2v2KMCr...@TK2MSFTNGP05.phx.gbl...
>Hello all,
>I did a dialog application with an utility class with 2 working threads in
>it that are calling callback functions of the xxxdlg class.
>
>Thread A is my main working thread. This thread is waiting on 2 events:
>1- Quit Event
>2- Optional callback call Event
>
>This thread calls a callback function on every WaitForMultipleObjects()
>timeout, here 5000 ms.
>
>Thread B is an optional thread that can be enabled/disabled at any time. This
>thread waits only on a quit Event, and when WaitForSingleObject() times out
>it sets the Optional Event of Thread A via SetEvent(). The timeout here
>is 15,000 ms.
****
Instead of giving an English explanation, why not show the code? It would help in the
analysis. What you say it is doing and what it is really doing might differ, and I can't
analyze a problem like this without seeing the code.
****
>
>Each thread calls AfxEndThread(0, FALSE); at the end, and the controlling
>function waits on A->m_hThread and/or B->m_hThread before deleting
>their respective objects.
****
ALWAYS consider the use of AfxEndThread as a coding error! You have NO IDEA what is going
to get lost when you call that. NEVER use it. Under any conditions. There is exactly
ONE correct way to terminate a thread, and that is returning from the top-level thread
function. So you arrange your code so that is how you terminate the thread.
I have no idea what is going to happen in your code if you call AfxEndThread, but it is
reasonably safe to assume that it is nothing pleasant.
You did not indicate if you have created your threads with the CREATE_SUSPENDED flag and
set the m_bAutoDelete flag FALSE. You have not shown the code that does the wait. There
is no possible way to do this analysis since all the crucial information (such as the code
that does the thread creation and the WaitFors) is omitted from the description.
****
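For illustration, here is a minimal sketch of the pattern being described, assuming an MFC
context; the names (Worker, hShutdownEvent) are illustrative and not from the poster's code.
The worker exits by returning from its top-level function, and the creator keeps the
CWinThread object alive so that waiting on m_hThread is safe:

UINT Worker(LPVOID pParam)
{
    HANDLE hShutdownEvent = (HANDLE)pParam;
    for (;;)
    {
        DWORD rc = ::WaitForSingleObject(hShutdownEvent, 5000);
        if (rc == WAIT_OBJECT_0)
            return 0;                  // normal exit: just return, no AfxEndThread
        // ... periodic work on each timeout ...
    }
}

// Creation side:
CWinThread* pThread = AfxBeginThread(Worker, (LPVOID)hShutdownEvent,
                                     THREAD_PRIORITY_NORMAL, 0, CREATE_SUSPENDED);
pThread->m_bAutoDelete = FALSE;        // m_hThread remains valid after the thread exits
pThread->ResumeThread();

// Shutdown side:
::SetEvent(hShutdownEvent);
::WaitForSingleObject(pThread->m_hThread, INFINITE);
delete pThread;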
>
>Now, if I do not enable thread B, I can start and end Thread A without
>any issue. If I start both threads A and B, I can also quit them without
>problem while they are both running. However, if I start both threads A and B,
>stop thread B, wait about 10 seconds, and then try to stop thread A,
>the WaitForSingleObject() on its handle deadlocks.
****
Threads are not enabled. Threads are suspended, running, or blocked.
And since there is no code shown, and no way to tell what sequencing is going on without
it, no analysis can be performed.
You have not indicated in any way if you set breakpoints, nor have you indicated where
each of the threads is executing when you see this "deadlock". Why have you not supplied
this utterly critical information?
Do a Debug>Break All, then use the Debug>Threads to look at the call stack of each of the
deadlocked threads; show the call stack for each thread, and when the thread is in your
code, show the source code that is executing.
Other than omitting everything required to do the analysis, there's nothing wrong with
this question.
****
>
>I have found out that it is related to the event I am using for telling
>thread A to execute the optional callback. If I simply comment out the
>SetEvent(), the problem never occurs.
****
Callbacks are dangerous as a way of life, especially in C++. Try to avoid ever using
them. They are an old C hack, rarely, if ever, valid in C++.
****
>
>Any idea, why this is happening?
****
Show all relevant code and the stack backtraces when it hangs, and there's a chance
someone could do an analysis. As the question stands right now, there's nothing more than
vague and, as far as I'm concerned, only semi-coherent descriptions of what might be going
on under some ill-defined set of circumstances.
joe
****
>
>Thank you
Joseph M. Newcomer [MVP]
email: newc...@flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
For example, your question is about the behavior related to
SetEvent(); well, maybe understanding how the internal member event
handles are created, and with what initial state and signaling/reset
behavior, will help. Here is what it has in the MFC\SRC\THRDCORE.CPP
source code:
startup.hEvent = ::CreateEvent(NULL, TRUE, FALSE, NULL);
startup.hEvent2 = ::CreateEvent(NULL, TRUE, FALSE, NULL);
They both have a bManualReset and a bInitialState setting of TRUE and
FALSE, respectively.
The question is:
Does that make SENSE to you for your synchronization
design logic and needs?
Read what the MSDN documentation for the bManualReset parameter says about TRUE:
    If TRUE, then you must use the ResetEvent() function to
    manually reset the state to nonsignaled.
Does that make sense to you?
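For comparison, a minimal sketch of the two flavors (handle names are illustrative): an
auto-reset event releases one waiter and reverts to nonsignaled by itself, while a
manual-reset event stays signaled until ResetEvent() is called:

// Auto-reset: bManualReset = FALSE. One successful wait consumes the signal.
HANDLE hAuto = ::CreateEvent(NULL, FALSE, FALSE, NULL);
::SetEvent(hAuto);
::WaitForSingleObject(hAuto, INFINITE);   // succeeds; event is nonsignaled again

// Manual-reset: bManualReset = TRUE. Waits keep succeeding until you reset it.
HANDLE hManual = ::CreateEvent(NULL, TRUE, FALSE, NULL);
::SetEvent(hManual);
::WaitForSingleObject(hManual, 0);        // succeeds
::WaitForSingleObject(hManual, 0);        // still succeeds: event remains signaled
::ResetEvent(hManual);                    // must be reset explicitly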
One other thing: if your secondary threads don't need the "overhead"
associated with CWinThread, then I would consider going direct
(CreateThread or _beginthreadex) to have full control of the situation.
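For what it's worth, a bare sketch of going direct with _beginthreadex; the routine and
variable names are illustrative:

#include <windows.h>
#include <process.h>

unsigned __stdcall WorkerProc(void* pArg)
{
    // ... wait on events, do the work ...
    return 0;                              // exit by returning from the thread routine
}

// Creation (_beginthreadex also initializes the CRT for the new thread):
unsigned threadId = 0;
HANDLE hThread = (HANDLE)_beginthreadex(NULL, 0, WorkerProc, NULL, 0, &threadId);

// Shutdown: signal the thread by whatever means you chose, then:
::WaitForSingleObject(hThread, INFINITE);
::CloseHandle(hThread);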
--
HLS
> ****
>> I have found out that it is related with the event I am using for telling
>> thread A to execute the optional callback. If I simply put the SetEvent()
>> in comment, the problem never occurs.
> ****
> Callbacks are dangerous as a way of life, especially in C++. Try to avoid ever using
> them. The are an old C hack, rarely, if ever, valid in C++.
-1.
--
HLS
****
However, if you examine the use of most callbacks, they are just C hacks transformed to
C++, usually badly. For example, how many callbacks really carry a "user-defined" value
with them? Even Microsoft totally screwed this up, and in 20+ years, has not fixed it
with some of the enum callbacks, making them impossible to use in C++. I did not say
"avoid them", I said "try to avoid ever using them". That's because there are often much
better approaches to the problem than a callback. It was reflexive for C programmers to
toss a callback into the mix whenever they felt like it; it is less necessary in C++, and
is often best handled by passing in an object of an abstract superclass with a pure
virtual method defined by the DLL interface, except what is passed in is actually an
instance of the derived class whose virtual method is specified. While this
sort-of-looks-like a callback, it is philosophically quite different from the C hack, and
is a cleaner solution in the C++ world.
joe
****
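To make that concrete, here is a rough sketch of the pattern (all names are illustrative,
not from any particular library): the caller passes an object derived from an abstract
interface instead of a raw function pointer, so the per-call state travels with the object:

class IProgressSink {
public:
    virtual ~IProgressSink() {}
    virtual void OnProgress(int percent) = 0;   // the pure virtual "callback"
};

// Library side: it only knows the abstract interface.
void RunLongOperation(IProgressSink& sink)
{
    for (int i = 0; i <= 100; i += 10)
        sink.OnProgress(i);
}

// Client side: state lives in the derived object; no statics required.
class DialogProgressSink : public IProgressSink {
public:
    virtual void OnProgress(int percent) { /* update the UI, log, etc. */ }
};

// Usage:
//     DialogProgressSink sink;
//     RunLongOperation(sink);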
Giovanni
"Joseph M. Newcomer" <newc...@flounder.com> ha scritto nel messaggio
news:65mdn51ti6sm1k2vf...@4ax.com...
-1 :)
In general, IPC DESIGN is independent of language, although some
languages lend themselves to IPC designs.
If you look at most communications applications, it's all callbacks at
the lowest levels. I don't care if you wrap it with a class, events,
messages, yada, yada, yada, it generally begins with callbacks. You
might not have to do it yourself at the layman application level, but
there is a CALLBACK in there somewhere prepared for you. We do this
for our developers; we have a low-level callback RPC Client/server
framework. The RPC clients prepare the callback and they receive RPC
server event signals. We also prepared Message and Event based layers
for those who want to use this. For these developers, it looks like
it's not a callback, but an "OnXXXXX" event. We even use interfaces and
properties, so that when you drop the control on your form, you have
your property window showing all the OnXXXXX events, so that the GUI
automatically creates the stub code when you click it, so on and so on.
But even with all this, it's all callbacks, and most IPC begins with
callbacks in the same manner, including sockets, MFC as well!
I beg you not to begin arguing this. Not necessary. I will only
agree that for the layman programmers, today, they are most used to
seeing OnXXXXX and messaging style development. Callbacks are
generally used at the system level or the layer below messaging.
--
HLS
But to get into the theory of callbacks and whether it's bad or good,
well, that's a waste of time to be discussing.
So -1 from that standpoint. :)
--
HLS
****
IPC == Inter-Process Communication. I see no IPC here; I see subroutine calls, within a
single thread, to a library component. So don't divert the discussion by bringing in an
unrelated topic.
****
>
>If you look at most communications applications, its all callbacks at
>the lowest levels. I don't care if you wrap it with a class, events,
>messages, yada, yada, yada, it generally begins with callbacks. You
>might not have to do it yourself at the layman application level, but
>there is a CALLBACK in there somewhere prepared for you.
****
But with the right layering, it can be clean and elegant, instead of the hack of passing
in a function pointer in the C sense.
Did you know that languages like Ada forbade passing function pointers, and instead
provided cleaner mechanisms for achieving the same goal?
Just because something is possible, or because there is a low-level implementation detail
that makes something work, does not mean that doing it is good, or exposing that low-level
detail as a first-class concept in the language is good design.
****
>We do this
>for our developers, we have a low level call back RPC Client/server
>framework. The RPC clients prepare the callback and they receive RPC
>server event signals. We also prepared Message and Event based layers
>for those who want to use this. For these developers, it looks like
>its not a callback, but a "OnXXXXX" event. We even use interfaces and
>properties, so that we you drop the control on your form, you have
>your property window showing all the OnXXXXX events so that the GUI
>automatically create the stub code when you click it, so on and so on.
>But even with all this, its all callbacks, and most IPC begin with
>callbacks in the same manner, including sockets, MFC as well!
****
Note that MFC does it by calling virtual methods of a superclass (CAsyncSocket), so there
is no visible "callback" mechanism involved. MFC message maps are another interesting
layer on what is the callback concept, somewhat less clean than virtual methods, but not
as ugly as passing function pointers to subroutines.
Note that the class "callback" isn't very clean in a lot of ways; for example, some
subroutine libraries store the callback address in a static variable inside the code, so
if you need to have multiple callbacks, possibly in multiple threads, you get into deep
trouble. The abstraction of the virtual method on an object instance allows the
capability of the callback without the unfortunate static binding that too often happens.
Callbacks can be packaged up into elegant abstractions, but to just pass a function
pointer in represents an assembly-code approach to the problem, when you want a high-level
abstraction (remember, C is the language which gives you the power of writing in assembly
code with all the expressive elegance of writing in assembly code).
Again, you are dragging IPC into a discussion where IPC was not involved. You can do
callbacks such as event sinks without exposing the ugliness of the raw callback mechanism.
****
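To make the contrast concrete, a rough sketch of the bare-minimum tolerable C-style
callback: the library threads a caller-supplied context pointer through every call, which
is exactly what the APIs complained about above fail to do (all names here are made up for
illustration):

typedef void (*EnumCallback)(int item, void* userContext);

void EnumerateItems(EnumCallback cb, void* userContext)
{
    for (int i = 0; i < 10; ++i)
        cb(i, userContext);                // the context travels with every call
}

struct Accumulator { int total; };

static void AddItem(int item, void* userContext)
{
    static_cast<Accumulator*>(userContext)->total += item;
}

// Usage: each caller supplies its own Accumulator, so multiple callers
// (or multiple threads) do not collide the way they would with a static binding.
//     Accumulator acc = { 0 };
//     EnumerateItems(AddItem, &acc);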
>
>I beg you not to begin arguing this. Not necessary. I only will
>agree that for the layman programmers, today, they are most use to
>seeing OnXXXXX and messaging style development. Using callbacks are
>generally at the system level or layer below messaging.
>> In general, IPC DESIGN is independent of language, although some
>> language lend themselves to IPC designs.
> ****
> IPC == Inter-Process Communication. I see no IPC here;
Gawd, I knew you were going to say that!!
To communications people, the "I" can mean both INTER and INTRA!
You are missing the boat here, since this is so fundamental, and to get
lost with statements that it's a C hack, well, it really invites yet
another worthless debate with you. Because of that I will "try" to
refrain from further comment.
--
HLS
> Again, you are dragging IPC into a discussion where IPC was not involved. You can do
> callbacks such as event sinks without exposing the ugliness of the raw callback mechanism.
Ok, let's try to keep this debate civil. I don't care for history, so
please try to refrain from personal opinions there, i.e. "callback is a
C hack".
What do you consider is a raw callback mechanism?
Now, in the previous message, you stated that if it didn't provide a
user-defined value, then YES, I will agree that a callback mechanism
that doesn't take into account:
1) Reentrancy
2) Providing for user-defined object/data access,
then yes, it is a poor implementation. I agree, and when dealing with
3rd party software with callback logic lacking the above, especially
#2, then absolutely, it's problematic.
But that is an implementation issue, and not a language or "C Hack"
issue. You can say that the C++ layer provides a cleaner interface,
and I agree, but I will also note that these are generally based on a
lower-level callback abstraction. In fact, I seem to recall dealing
with 3rd party software where it lacked a user-defined value, and using
a C++ wrapper helped resolve that by making the callback static in the
class and combining it with TLS. I forget that project, but I do
recall going through those motions.
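A rough sketch of that wrapper technique, under assumptions: the 3rd-party API
(LegacyRegister and its context-less callback) is hypothetical and stubbed here only so the
sketch hangs together; the static member function plus TLS smuggles the object pointer
across:

#include <windows.h>

// Hypothetical 3rd-party API: the callback carries no user-defined value.
typedef void (*LegacyCallback)(int code);
static LegacyCallback g_legacyCb = 0;
void LegacyRegister(LegacyCallback cb) { g_legacyCb = cb; }

class LegacyWrapper
{
public:
    virtual ~LegacyWrapper() {}
    void Start()
    {
        if (s_tlsSlot == TLS_OUT_OF_INDEXES)
            s_tlsSlot = ::TlsAlloc();        // one slot shared by all instances
        ::TlsSetValue(s_tlsSlot, this);      // remember "this" for the registering thread
        LegacyRegister(&LegacyWrapper::Thunk);
    }
    virtual void OnEvent(int code) { /* per-object handling goes here */ }

private:
    static void Thunk(int code)              // what the 3rd-party library actually calls
    {
        LegacyWrapper* self = static_cast<LegacyWrapper*>(::TlsGetValue(s_tlsSlot));
        if (self != NULL)
            self->OnEvent(code);             // recover the object, dispatch normally
    }
    static DWORD s_tlsSlot;
};
DWORD LegacyWrapper::s_tlsSlot = TLS_OUT_OF_INDEXES;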
So it depends on what kind of design you are referring to. Writing
a C++ implementation is excellent, and most of our stuff is written in
this way using interfaces. But I guess I have to wonder why we use
C++ in general. I think it is because of its
- natural scoping capabilities,
- constructors and destructors,
- polymorphism and interfaces.
The virtual interface is a large part, but not the only part.
Keep in mind you can duplicate all the above within pure C if you
provide the library code to do so.
--
HLS
>Joseph M. Newcomer wrote:
>
>> Again, you are dragging IPC into a discussion where IPC was not involved. You can do
>> callbacks such as event sinks without exposing the ugliness of the raw callback mechanism.
>
>
>Ok, lets try to get this debate civilly. I don't care for history so
>please try to refrain from personal opinions there, i.e. callback is a
>C hack.
****
Actually, a callback is an assembly code hack, translated into C. This is not an opinion,
it is a statement of fact. Callbacks at this level exist because the languages were
unable to provide a suitable abstraction, so instead of some clean mechanism, the old
let's-use-a-pointer technique was recast from assembler to C. It is a hack in that it is
simply a translation of a machine-level concept directly into a high-level language.
OTOH, the notion of virtual methods as a means of invoking operations is consistent with
the linguistic design of C++, and although it is done *exactly* by using a pointer to
redirect the call, it is done in a framework that is semantically consistent with a
high-level abstraction.
****
>
>What do you consider is a raw callback mechanism?
*****
call [eax]
substitute other register names, it is isomorphic to renaming. Expressed in C, it is
implemented by passing in a function pointer as a parameter to a call, with the purpose
that the specified function is called when there is a desire to invoke some operation. It
is limited to a single function in general.
*****
>
>Now, in the previous message, you stated that if it didn't provide a
>user-define value, then YES, I will agree that a callback mechanism
>that doesn't take into account:
>
> 1) Rentrancy
> 2) Provide for user-defined object/data access,
>
>then yes, it is a poor implementation. I agree, and when dealing with
>3rd party software with callback logic lacking the above, especially
>#2, then absolutely, its problematic.
****
Sadly, most 3rd party software fails in both the above. The Windows API has far too many
callback-style APIs that fail in the same way, including, inexcusably, some that were
added in either 2000 or XP, when the issues were well-known, but totally ignored.
****
>
>But that is an implementation issue, and not a language or "C Hack"
>issue. You can say that the C++ layer provide a cleaner interface,
>and I agree, but I will also note that these are generally based on a
>lower level callback abstraction. In fact, I seem to recall dealing
>with 3rd party software where it lacked a user-defined value and using
>a C++ wrapper help resolved that by making the callback static in the
>class, and combining it with TLS. I forget that project, but I do
>recall going through those motions.
*****
The issue is in confusing an interface that works directly and "in-your-face" with raw
function pointers, and one which uses the syntactic features of the language (like virtual
methods, a first-class concept) to disguise the implementation details and provide for a
degree of cleanliness and elegance. For example, compare try/catch/throw or _try/_except
to setjmp/longjmp as mechanisms. All are stack unwinders. But setjmp/longjmp is an
inelegant kludge compared to _try/_except, and neither will work in C++ because of the
need to invoke destructors as the stack unwinds.
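A small illustration of that last point, nothing more: longjmp() jumps straight over C++
destructors (formally undefined behaviour for objects with non-trivial destructors), while
throw/catch runs them during unwinding:

#include <csetjmp>
#include <cstdio>

static std::jmp_buf g_env;

struct Guard {
    ~Guard() { std::printf("Guard destroyed\n"); }
};

void ViaLongjmp()
{
    Guard g;                     // destructor is skipped when longjmp unwinds past it
    std::longjmp(g_env, 1);
}

void ViaThrow()
{
    Guard g;                     // destructor runs during exception unwinding
    throw 1;
}

int main()
{
    if (setjmp(g_env) == 0)
        ViaLongjmp();            // comes back here; "Guard destroyed" never prints
    try { ViaThrow(); } catch (int) { /* "Guard destroyed" printed on the way here */ }
    return 0;
}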
Note that in what is now my 47th year as a programmer, I have used callbacks under all
kinds of conditions; I have grown to despise it as a way of life, particularly because it
is so easily misused and abused. I have written stack unwinders, I have implemented
try/catch mechanisms, I have implemented event notification systems, interrupt handlers,
and pretty much everything that is possible to do with pointers-to-functions. And I
prefer to use the C++ virtual function model. Note that the message map is actually a
poor kludge; in a sane world, they would have been virtual methods, but in 16-bit windows
the vtables would have gotten too large and the message map was invented as a more compact
representation of the abstraction. It might have even worked well if they had gotten just
a few more details right, such as always accepting the parameters passed to the superclass
(this is a failure of design and implementation of a callback mechanism!). So I've seen
all possible mistakes, and even made quite a few of them, particularly in my first ten
years or so of programming.
So I recognize the difference between a low-level hack and a methodology that is part of a
linguistically consistent framework.
I was shocked when Ada did not allow the passing of function pointers, wondering how it
was possible to specify callbacks. Then I realized the language had alternative
mechanisms that eliminated most of the need for user-specified callback functions
(internally, the generated code performed callbacks, but you didn't have to see that).
This was in the late 1970s, and since then I have realized that the raw callback is just a
continuing hack to get an effect that should be achievable in other ways. The C++ virtual
method (or C#, or Java virtual method) is one of these mechanisms. And there are those
who will argue that even virtual methods are a hack, and that embedding, using interfaces,
is the only way to go (e.g., COM, and the new Google GO language, which I have seen only
little bits of). Ultimately, we want to get away from the decades-old concepts (like the
computed GOTO) that were done to allow high-level programmers to create constructs that
compiled into efficient code at the low level, and go for clean abstractions (which most
decent compilers can compile into incredibly good code at the low level, with no effort on
the part of the programmer. I used to say that in a good language with a good compiler,
you can write six levels of abstraction that compile into half an instruction. I've
worked with good languages and good compilers, and have done this, repeatedly).
It's too easy to get captivated by the implementation and forget that the implementation
details are best left to automated mechanisms. Compilers are really good at this sort of
grubby detail. At the lowest level, the implementation might be the instruction
call [eax]
but you should never have to think of this at the coding level. The construct
function(args)
where 'function' is actually a pointer is too close to the call [eax] to be really
comfortable.
Fortunately, the tools we have for MFC eliminate many of the visible details of how
message maps are actually dispatched. At the lowest level, it really is
call [eax]
(in fact, if you single-step through the AfxWndProc assembly code far enough, this is what
you will see, isomorphic to renaming of the register). But as an MFC programmer, I have
a very high-level concept: add an event handler. I don't need to see the details of the
implementation. It just works. Well, sort-of-works, but I've already described the
problems there. Virtual methods, if you accept derivation as the way of creating new
classes, do the same job.
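For reference, what "add an event handler" looks like at the source level in MFC; the
macros hide the dispatch, which at the bottom is still an indirect call. CMyDlg and
IDC_START are illustrative names, and the usual dialog boilerplate is omitted:

class CMyDlg : public CDialog {
protected:
    afx_msg void OnBnClickedStart();          // the "event handler" the programmer writes
    DECLARE_MESSAGE_MAP()
};

BEGIN_MESSAGE_MAP(CMyDlg, CDialog)
    ON_BN_CLICKED(IDC_START, &CMyDlg::OnBnClickedStart)
END_MESSAGE_MAP()

void CMyDlg::OnBnClickedStart()
{
    // handler body; the framework found its way here through the message map
}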
****
>
>So it depends on what kind of design you are referring too. Writing
>an C++ implementation is excellent and most of our stuff is written in
>this way using interfaces. But I guess I have to wonder why we use
>C++ in general. I think it because of its
>
> - natural scoping capabilities,
> - constructors and destructors,
> - polymorphisms and interface.
>
>The virtual interface, a large part, but not the only part.
>
>Keep in mind you can duplicate all the above within pure C if you
>provide the library code to do so.
****
And you can write in assembler, too, but it doesn't mean it is a good thing most of the
time.
I'm working on a course in assembly code, because there are still people who need to work
with it (yes, I was surprised, but the uses are legitimate). One of the surprises was the
number of people who need to write really tight, high-performance SIMD code (apparently
the x64 intrinsics don't produce particularly good code when they are used for this). But
it doesn't mean that people should write apps in assembler.
If my customer base accepted it, I'd be writing in C# or WPF, but they don't want this. In
fact, are opposed to it (I'm not sure I follow the reasoning, but they write the checks,
and I want to take their money). So I write in C++, and in a few cases in C (and I just
finished a library in C, several thousand lines of code, in which callbacks form a
particularly important part of the functionality. But I'd rather have done it in C++ and
just supplied a pure virtual method. You would not BELIEVE what I saw done there when I
got the library; it was a particularly ugly callback, well, I can't really call it
"design", and "kludge" gives it too much dignity, but as redesigned, it is essentially a
virtual method mechanism and therefore intellectually manageable, as well as handling a
ton of problems the old mechanism simply ignored). So I still use them, but they *should*
be avoided as a way of life in most coding. I nearly made all the complexity of the
earlier kludge disappear in the new design, which took major rework to get complete and
consistent. Doing a callback *right* isn't easy, and most people, I have found, take the
easy solution. If you assume that you should avoid them, then you use them only when
necessary, and ideally wrap enough syntactic sugar around them to make them go down easily
(e.g., virtual methods in C++).
joe
*****
Anyway, it would be interesting to hear your critique of the design faults
of the human brain! :)
--
Joseph M. Newcomer wrote:
--
HLS
>
>Man, reading you, one has to wonder why the world has blown up yet or
> gotten this far. Everything was wrong, badly designed, hacked and no
>one ever used it right or differently. Its all one way with you. Just
>consider your inconsequential historical callback note had nothing to
>do with the OP issue or question, nor contributed to the problem. I'm
>sure until the code is posted, you would not exclude it as a
>possibility. I say its mostly likely unrelated.
****
What continues to amaze me is that ideas we knew were bad in 1970 keep getting re-invented
by another generation who doesn't understand why they were abandoned. We worked for
decades to improve the quality of programming, and here we are, in 2010, where the
state-of-the-art is stuck essentially at C, and the C fanatics wonder why we say there are
problems. There are problems because nobody actually pays attention to the past, looks at
past successes or failures, but just start, from scratch, reinventing the same bad ideas
over and over and over again. We just re-invented timesharing, which we realized by the
early 70s was a Bad Idea. Now we call it "cloud computing". Duh. For all the same
reasons that timesharing was bad in the 1970s, cloud computing is bad. So if I seem
overly cynical, remember that this is not the FIRST time I've seen bad ideas re-invented;
I've been around long enough to see most of them re-invented two or three times.
Approximately a generation apart.
Unfortunately, I'm not the kind of old codger who longs for the "good old days". The best
part of the good old days is that they are in the past, and we have grown beyond them. And
then someone comes along and tells me that the good old days were the best time, and the
ideas we tried and abandoned are essential to good software. I am skeptical of this.
Why aren't we programming in functional languages? Why do we even still have compilers
that run as separate preprocessors to execution? (Seriously: trace-based compilation
systems exist, and run, and are used every day, and here we sit with C and C++ and VB and
C# compilers, stuck in the punched-card model that I had abandoned by 1969, forty years
ago. C/C++ used to have a working edit-and-continue system until it was broken, and while
C# and VB make such a system trivial, they never seem to have had it. Duh. We've gone
backward since VS6, instead of forward.
Callbacks were a bad idea; we knew better how to handle this in the early 1970s. Look at
LISP closures, for example, and the large number of languages that managed to implement
closures by the 1980s. Mutex-style synchronization was dead by the end of the 1980s, but
not one of the good implementations of interthread synchronization made it to
commonly-used languages. So we see deadlock problems. Callbacks, by the way, introduce
problems in reasoning about program logic which makes reasoning about deadlock causes much
harder, so my observations are *not* irrelevant to the OP. I've been doing multithreading
since 1975 as a way of life, and was even doing multithreading back in 1968. And I
learned that explicit locking is usually a mistake. It is a hack to solve a problem that
should be solved by architecture, and the low-level lock we are familiar with, although it
needs to exist, should be completely invisible to the programmer (example: putting
elements in queues requires a lock at the lowest level, but I should never see that or
have to reason about it). People who worried about these issues have won Turing awards
(the computer profession's equivalent of the Nobel Prize) yet not a single one of their
ideas exists in our programming languages. The "synchronize" capabilities of Java and C#
are deeply flawed (a friend of mine just got his PhD for building a program that finds
synchronization errors in Java, and his comment is, "Everyone gets threading wrong all the
time. Start with that as your premise" and has the experience of examining, with his
program, over half a million lines of Java written by some of the best Java multithreading
experts to demonstrate that even they made serious errors).
So yes, we are, for all practical purposes, programming largely using 1970 technology,
except when we are using 1960 technology. In the 1980s, I was one of the founding
scientists of the Software Engineering Institute, and was examining the best-of-the-best
technology so we could figure out how to get it into the mainstream. The mainstream
didn't want it; they were content with 1960s technology because they were comfortable with
it. Learning new stuff, and better ways to do things, was not an acceptable agenda. I
left the SEI when I realized that (a) it had nothing to do with the actual *Engineering*
of software, but was concerned with the *management* of the process and (b) industry
didn't want to change what it was doing for any reason, no matter how cost-effective it
might be in the long run.
I really don't want to live in the past, and when I complain that yet again we are
replicating the technology of the 1950s and 1960s, somebody comes along to explain that it
is necessary we do so because there aren't better ways. There have *always* been better
ways.
Consider: JavaScript, the darling of AJAX, is just Simula-67 done badly. The heart of
AJAX, XML, was done better in both 1977 (in a project I was responsible for) and 1981 (in
a PhD dissertation done by a friend, an outgrowth of refining the problems we discovered
in the existing 1977 implementation). DTDs were designed by people who had no experience
designing languages or grammars (just ask any professional language designer. We still
have quite a few around, including a friend of mine who designed the Ada competitor).
Those of us in our 60s, who were there at the leading edges in the 1970s through 1990s,
look around and see that there has been very little progress made in forty years. What we
lament is not that the good old days are gone, but they are, alas, still with us.
joe
****
Do you think this is related to the faults of the human brain?
Do you think this is related to the fact we have yet to evolve
carbon-based chips?
Yes, I do agree things are reinvented with new generations. You see it
all the time to your bewilderment, especially in my market. My
daughter last year said "Dad, I have a great idea. What if we
consolidate all our email accounts with a single tool!" I said, "Like
Outlook?" She said "Huh?" So I showed her, and a little dumbfounded
she said "But it's not the WEB!" Not to discourage her entrepreneurial
spirit and fire in her belly, I helped her outline a business
proposal! Go West!
Yes, I do agree the SEI was a disappointment of its earlier promises.
I was there following the proposal, acceptance, building built,
ceremonies and people recruitment. Westinghouse was a big part of it,
and one of our think tank people went there, Dr. ALAN Something. An AI
guru, he might be among those you were complaining about that shifted it
to a management enterprise. I entered the think tank shortly before he
left, so I didn't know him that well, but I did take over some of his
responsibilities, which included carte blanche to explore all new
computer technology, machines and languages at the time. Gee, I
remember reading an article in some magazine about something called
"HyperText." So I implemented an example of a Criminal Database lookup,
using a demo with a picture caricature of my boss, and the meeting room
applauded with amazement and laughter! I was KING! Simple minded
people, I recall. But we certainly underestimated the potential of
hypertext. We brought in that Pittsburgh startup "Knowledge Base
Something" (you probably remember them); they developed a hypertext
database system. I recall the complexity, the slowness and saying
"But it doesn't work on a PC!!!"
Yes, there were all kinds of faults, and things could have been better,
but I guess like most of us, we complain more than we take action.
Anyway, even with all our faults, you and most of us in the industry
did seem to have gotten pretty far. :)
--
Joseph M. Newcomer wrote:
--
HLS
>
>First, a "silent WOW!"
>
>Do you think this is related to the faults of the human brain?
****
No, I think it is largely due to the fact that nobody asks the people who have been there,
done that, if an idea is a good idea or not. Or some nitwit goes off and builds a tool
that works precisely AGAINST common practice because he's never actually looked at what
real users do (and we get the VS IDE). The failures are failures to either understand
what the users need or to ask if this is really a good idea before plunging ahead and
doing it.
****
>
>Do you think this is related to the fact we have yet to evolve
>carbon-based chips?
****
There is a mythos about how fast the brain works. The brain is actually amazingly slow.
Generally, we do not think faster than we can talk. HOWEVER, the almost-holographic
retrieval system we have appears to run as a massively parallel search engine, something
yet to be achieved with computers, and the peculiar spark we call "insight" or
"imagination" or "the Aha! experience" has not been simulated on computers. But a "chip"
based on neural connections probably wouldn't work unless it weighed about 8 pounds and
had the 3 billion or so neurons that make up the human brain.
Read Kurzweil's ideas on silicon complexity. I don't believe most of it, because I think
the complexity is really in the interconnects, and we don't understand neural
interconnects yet. But I love when people start talking about "carbon-based chips",
because it usually requires that they start explaining what they mean, and pretty soon
we're back to the human brain again, with its amazing complexity.
****
>
>Yes, I do agree things are reinvented with new generations. You see it
>all the time to your bewilderment, especially in my market. My
>daughter last year said "Dad, I have a great idea. What if we
>consolidate all our email accounts with a single tool!" I said, "Like
>Outlook?" She said "Huh?" So I showed her and a little dumbfounded
>she said "But its not the WEB!" Not to discourage her entrepreneurial
>spirit and fire in her belly, I helped her outline a business
>proposal! Go West!
****
But Web-based email readers have been around for ten years...including ones that can read
from multiple servers. Turns out that they are really very complex email programs that
produce HTML for rendering...and don't forget the EMACS-based systems, which unified
everything. I once commented, after a number of disasters in interactive systems including
two that I built, "The only sane approach to building an interactive language is to build
a text editor which occasionally executes the text you are editing". EMACS does that.
****
>
>Yes, I do agree the SEI was a disappointment of its earlier promises.
> I was there following the proposal, acceptance, building built,
>ceremonies and people recruitment. Westinghouse was a big part of it,
>and one of our think tank people went there, Dr ALAN Something. An AI
>guru, he might be among those you were complaining about that shift it
>to a management enterprise.
****
No, it was Alan Newell, and he never believed in the shift to management. I know this,
because I knew him. But while he was part of the formation of the SEI, the incompetent
dweeb who took over ran it his own way, using cronyism as his hiring criterion, hiring
some of the singularly worst people in the known universe, who could spin a good story but
were really short on serious technical skills (this guy had an ego so huge that he
requested that the new SEI building have a helipad on top so he could be brought to work
every morning by helicopter, at the Air Force's expense. It was even in the plans for the
new building! After laughing hysterically (I suspect) the Air Force vetoed the plan and
no helipad was constructed. And his request for a chauffeured limousine was treated with
the same disdain, and he was deeply offended ("I'm a National Resource!" he proclaimed).
And these are the funny stories. The sad stories are far worse, and the stories about how
his second-in-command had terrorized the staff to the point where they were afraid to
raise serious issues are a documentary of horrible management practice. He even tried to
terrorize me, and never forgave me for laughing at him, and pointing out that he was
totally powerless over me). So they *had* to turn it into a management institute because
not one of them was competent to write a line of code (the comment made privately to me by
a former employer of one of them: "at least he's not here, where he did real damage"), and
management appealed to non-technical managers. The military is full of unqualified
managers who like ideas that give them a secure basis to keep being unqualified managers
and shows them a path for promotion in three years. I watched it go downhill, and got out
because it was clearly going to be a waste of my time to stay; I'm an ENGINEER, and they
wanted nothing to do with engineering! One total idiot wasted a couple hours of the time
of over thirty people in a meeting where his key idea was to have 10 levels of scientists,
plus a category called "Institute scientist". When I asked the purpose of this, he said
"Well, if you are a class 5 scientist, you are not invited to meetings of the class 6
scientists unless they want you to make a presentation" (My response: "Wonderful! Just
what we need! A way to LIMIT communication! Wow!") His claim was that "When I was at
<XXXX> we had 1200 scientists and this was essential" and I finally walked out of the
meeting, after calling him a complete blithering idiot to his face, proclaiming that
anyone stupid enough to waste as much of our time as he had should consider alternate
career plans. I pointed out that we had *12* scientists! [He had just accused me of
being afraid that I would be a lower grade than he was, and I pointed out that since I was
already at the top of the heap in his proposed plan, it didn't matter to me, I just
thought the whole idea was a waste of time. "After all, according to your rule, an
institute scientist is someone with a PhD [which he didn't have] and ten years of academic
and industrial experience, and my PhD is 11 years old, encompassing 6 1/2 years of
academic experience and 3 1/2 years of industrial experience, plus a year here at the
SEI..."] Sadly, a lot of people who were at that meeting thought this was a Really Good
Idea, which shows the quality of the people we had employed at that point)
****
>I entered the think tank shortly before he
>left, so I didn't know him that well, but I did take over some of his
>responsibilities which included a card blanc to explore all new
>computer technology, machines and languages at the time. Gee, I
>remember reading an article in some magazine about something called
>"HypeText." So I implemented an example of a Criminal Database lookup,
>using a demo of a picture caricature of my boss and the meeting room
>applauded with amazement and laughter! I was KING! Simple minded
>people, I recall. But we certainly underestimated the potential of
>hypertext. We brought in that pittsburgh startup "Knowledge Base
>Something", you probably remember them, they developed a hypertext
>database system. I recall the complexity, the slowness and saying
>"But it doesn't work on a PC!!!"
****
It was KMS, Knowledge Management Systems, an offshoot and commercialization of the ZOG
hypertext project that we had at CMU in the late 1970s. I wrote my first hypertext
document, a user manual for the PQCC (Production Quality Compiler-Compiler) tooling (the
XML predecessor I alluded to) about 1979 or 1980, using ZOG. My first Web-like hypertext
product preceded Tim Berners-Lee's invention, but since it was for a small software
company, it didn't make the splash HTML made. The product was delivered in 1988 and had
1300 "Web" pages of hypertexted document in it. You could even click on the links! It
was implemented using SPRINT, which was Borland's Scribe-like formatting program, and that
represented our markup language (instead of HTML). [It doesn't help to be the first who
did something if everyone ignores you]. So my reaction when I first saw a Web page,
around 1991 or so, was "Gee, I did that three years ago...".
KMS was formed to build the hypertext system that was used in the USS Carl Vinson, one of
the modern nuclear aircraft carriers. (To take an aircraft carrier to sea from ordinary
docking took three days of prep, thousands and thousands of steps that had been in massive
printed books). Robert Acksyn and Don McCracken were principals in that company; Don had
an office next to mine at CMU, and worked closely with Al Newell on the development of
ZOG, and was one of the key implementors of the system. It ran on a PERQ, which was a
computer developed by another CMU spin-off, Three Rivers Computer Corporation. It sorta
kinda ran. It was developed in the late 1970s, long before there were PCs. The New Kids
On The Block were Apollo Computer and Sun, both of whom used 68000-based workstations,
horrendously overpriced. But 3RCC couldn't build a reliable system, and lost out (the one
in my office took ten minutes to boot; it had to warm up to where the propagation delays,
based on the temperature coefficient of prop delay change, fell within the margins that
made the circuits work). It used a Pascal p-code compiler, not a very good compiler. The
Perq Operating System, POS, which was based on a single-threaded
single-application-running model much like the later MS-DOS, was, well, a POS. Sun was
running Unix, and Apollo had Aegis, which was Unix-like but much better. A PC, up until
Windows 3.0, didn't have the horsepower or architecture to run KMS well. X-Windows wasn't
around, either.
KMS was more than hypertext; it had a scripting language (imagine a forerunner of
JavaScript) behind it, and links could activate code. Hard to implement on machines
without GUIs. The PERQ, for all its reliability faults, was an interesting
commercialization of the GUI-based ideas pioneered by Xerox PARC in the Alto and
subsequent machines. Apollo was GUI-based as well, but Sun's Unix machines were just Unix
machines with a model-33 teletype simulator and command line interface.
Al Newell was one of the brightest people that I have ever known. I enjoyed every minute
I spent with him, even the ones where he convinced me that the dissertation I was working
on was a complete waste of time and I needed three more years of cognitive psychology
before I could *begin* to tackle a problem that complex. So I went off and did a PhD on
optimizing compilers, and was one of the founders (there were about six of us, over a
period of just a couple years from about 1974 to 1979 or so) of a whole new subdiscipline
of compiler technology (although I had no idea at the time I was actually doing anything
more than just a cool AI-based approach to code generation). Al Newell gave me a very
hard time at my PhD defense, and I found out later he was trying to see if he could push
me to a breaking point. He couldn't, and later told me "good job". That was when I knew
I'd done something worthwhile.
He was one of the people who convinced me that understanding the user model was critical.
That the only important letter in "GUI" was the middle one. That understanding user
expectations was critical. That the element of surprise in an interface always indicated
a bad design. I learned about planning horizons from him, and that systems that had
distant planning horizons were inherently badly designed (e.g., compare VS6 with VS.NET;
VS6 had a near planning horizon; VS.NET has a distant one). That layout and accessibility
in interfaces mattered, that things that flashed but weren't important were distractions.
That focus-of-attention, which is where the user is looking RIGHT NOW, is one of the most
important concepts of all in interface design. Workflow matters, and interfaces that work
against workflow are also inherently bad designs. All of this was based on well-founded
and deeply-understood principles of human perception. So my complaints about, say, Office
2007 or VS.NET are not random bitching that "it ain't the same", they are based on the
fact that these "improved" interfaces go directly against deeply-understood principles of
good interface design formulated over forty years of research by some of the best people
in the field. Or to put it another way, who would you trust as an interface designer: (a)
the Office 2007 Ribbon Bar team (b) the VS.NET IDE designer (c) Donald Norman? If you
answer anything other than (c), you deserve whatever crap interface you get.
The first captain of the Carl Vinson, Dick Martin, was later my division head at the SEI.
I still harbor the belief that he was the only competent manager in the building in that
period. He understood, possibly better than anyone I have worked for before or since, the
nature of technical management. I'd work for him again, any time. He understood how to
be demanding, how to be fair, how to lead, and how to treat technical people with respect.
The director of the SEI at that time, by contrast, thought that technical people,
especially those with PhDs, were a blight on the otherwise fine management-oriented SEI
team he had assembled, and never hesitated to let us know how low on the hierarchy we
were. His second-in-command, another ex-military type, couldn't stand the fact that we
didn't wear uniforms [or in my case, a jacket and tie. I have not worn a necktie since my
college graduation in 1967, which is also the last time I wore a suit] and tried to
establish an institutional dress code. My immediate manager reacted to this by coming to
work the day after the proposal was made, wearing the shoes, jeans, and T-shirt he'd worn
when re-tarring his garage roof. Normally, he wore a jacket and tie.
The SEI was, and for that matter (according to my inside sources) still is, an actively
hostile place to work. The employees, at best, have what I would characterize as an
abusive co-dependency relationship with what passes as management (as a former abusive
co-dependent in a management-employee relationship, I feel their pain). As far as I know,
there are no "alumni" of the place who look on it as the best years of their career;
rather, they are grateful for having gotten out with their sanity and without having
terminated their job by showing up at work with an assault rifle. Sadly, some of the
very, very best people in the field who went to the SEI left it when the abuse exceeded
their threshold. While I have not talked to every former SEI employee, I've known a LOT of
them, and they are uniform in their view that it was a horrid place to work, possibly the
worst place they had ever worked for, even when they actually accomplished something
technically solid that they were proud of.
****
Compared to what?
> But a "chip"
> based on neural connections probably wouldn't work unless it weighed about 8 pounds and
> had the 3 billion or so neurons that make up the human brain.
Got to start somewhere, as did the early transistors.
> But Web-based email readers have been around for ten years...
Actually 1996 for us - Wildcat! Internet Net Server. 1996 Var
Business "Editor's Choice", PC WEEK, PC Computing "MVP, Best of the
Net" and InfoWorld.
> >Yes, I do agree the SEI was a disappointment of its earlier promises.
> > I was there following the proposal, acceptance, building built,
> >ceremonies and people recruitment. Westinghouse was a big part of it,
> >and one of our think tank people went there, Dr ALAN Something. An AI
> >guru, he might be among those you were complaining about that shift it
> >to a management enterprise.
>
> ****
> No, it was Alan Newell, and he never believed in the shift to management.
Last name doesn't ring a bell, but yes, Alan was more of a geek. If
he was the only "Alan" and came from Westinghouse, then that was him.
Hmmm, he could have had a sabbatical with Westinghouse to help get the
then-new AI/Venture group started. I'm sure it was him.
> I know this,
> because I knew him. But while he was part of the formation of the SEI, the incompetent
> dweeb who took over ran it his own way, using cronyism as his hiring criterion, hiring
> some of the singularly worst people in the known universe, who could spin a good story but
> were really short on serious technical skills (this guy had an ego so huge that he
> requested that the new SEI building have a helipad on top so he could be brought to work
> every morning by helicopter, at the Air Force's expense. It was even in the plans for the
> new building!
I understand the feeling. There was much of that going on in the AI/
Venture Group as well. I'm sure Circle W had a lot to say in SEI
direction, since Defense was a big part of its funding and a major part
of what we were doing. You have to remember you "Academic Guys" were
there for us. I was not too happy with the direction, missing all
sorts of opportunities. But if a new idea didn't bring in 100M annual,
it wasn't worth it. One of the things the group "invented" was
Offshore Software Engineering using Indians. The basic idea was to
use American SEs and have Indians code it. SEI was to be part of the
plan, proposed to the World Bank to get the funding to help develop
3rd world countries; we invited Microsoft, and Gates turned it down
(only to begin hiring green cards a year later). The idea was shut down;
one of the Indian management cronies went on to develop the idea in our
Monroeville Mall offices, and I remember seeing offices full of Indians
when we were working on Reagan's "Star Wars" projects - the writing on the
wall for US programmers. I am still shamed by all that. One project I was in
charge of was preparing an online BBS for the corporation to help share
knowledge with all divisions. I helped the PCBOARD people with the
first BBS to run over X.25/PADS. To print your morning messages, I
helped co-write the "Emailator", one of the first offline mail reading
systems, based on TapCIS. Another project was Optical Scanning of
military/legal documents with OCI capabilities. It didn't get the
"large" contracts, and the project was killed. Three of us asked
permission to take it on our own. Getting corp permission, we quit
and started OptiSoft to build the OptiFile PC-based system using the
advanced scanning/imaging board. It failed 1 year later, from too long a
sales cycle and too many competitors. I decided to go pure software (no
hardware) and started Santronics Software, Inc. to concentrate on offline
mail systems (OMS) for the blossoming telecomputing business. OMS
declined by 1990/93 as the cost of being online decreased. By 96, I
purchased the then #1 online hosting software/system in the world.
Ironically, OMS is making a comeback as AT&T and its offspring are
once again charging a premium for online data access, especially for
mobile.
> ****
> It was KMS, Knowledge Management Systems,
That was it. I wasn't too impressed. :)
> KMS was formed to build the hypertext system that was used in the USS Carl Vinson, one of
> the modern nuclear aircraft carriers.
Correct, Defense and Expert Systems were a large part of the work we
were doing.
> (To take an aircraft carrier to sea from ordinary
> docking took three days of prep, thousands and thousands of steps that had been in massive
> printed books).
It was more than that. The concern was the dying breed of "expert"
engineers, old farts that knew everything about the ships, subs,
planes, elevators, etc., dying off (retiring), and the US would need
"Expert Systems" built by KB engineers who knew how to ask the right
questions and extract and "computerize" all the knowledge. One of
the goals was more diagnostic in nature, using fuzzy logic code.
> The New Kids
> On The Block were Apollo Computer and Sun, both of whom used 68000-based workstations,
> horrendously overpriced.
Yup!
> A PC, up until Windows 3.0, didn't have the horsepower or architecture to run KMS well.
> X-Windows wasn't around, either.
There was a push to get an OS/2 version developed, since that was the
primary "direction" and MS/IBM were in cahoots. I got all the OS/2
compilers with draft docs! :)
This is a prime example of where "LESS" was better, when the OS/2 killer
- Windows 3.1 and VB - was introduced :) Oh, do I remember how staffs
of 50 were cut down to 20! :)
>On Feb 14, 11:06 pm, Joseph M. Newcomer <newco...@flounder.com> wrote:
>> >Do you think this is related to the fact we have yet to evolve
>> >carbon-based chips?
>>
>> ****
>> There is a mythos about how fast the brain works. The brain is actually amazingly slow.
>
>Compared to what?
****
Oh, say, a 4-function calculator. Go read Lindsay & Norman "Human Information
Processing", the introductory book to cognitive psychology.
****
>
>> But a "chip"
>> based on neural connections probably wouldn't work unless it weighed about 8 pounds and
>> had the 3 billion or so neurons that make up the human brain.
>
>Got to start some where as did the early transistors.
****
The secret is not in the technology, but in the interconnects.
****
>
>> But Web-based email readers have been around for ten years...
>
>Actually 1996 for us - Wildcat! Internet Net Server. 1996 Var
>Business "Editor's Choice", PC WEEK, PC Computing "MVP, Best of the
>Net" and InfoWorld.
>
>
>> >Yes, I do agree the SEI was a disappointment of its earlier promises.
>> > I was there following the proposal, acceptance, building built,
>> >ceremonies and people recruitment. Westinghouse was a big part of it,
>> >and one of our think tank people went there, Dr ALAN Something. An AI
>> >guru, he might be among those you were complaining about that shift it
>> >to a management enterprise.
>>
>> ****
>> No, it was Alan Newell, and he never believed in the shift to management.
>
>Last name doesn't ring a bell, but yes, Alan was more of a geek. If
>he was the only "Alan" and came from Westinghouse, then that was him.
>hmmm, he could of had a sabbatical with Westinghouse to help get the
>then new AI/Venture group started. I'm sure it was him.
****
Al Newell was an AI guru, but as far as I know he had no deep relationship with
Westinghouse. He was a University Professor, a rank that is equivalent to being a
one-person academic department. We have a very small number of them, I think six or eight
in the whole University.
****
****
In 1991, right before Windows NT was released, I gave a "breakfast talk" to a roomful of
mainframe managers on the future of computing. Pretty much everything I predicted came
true, and in fact by 2000 most of my predictions, radical for 1991, had become
underestimations. But I said "There are three operating systems which will be interesting
to watch. The first is Unix, The Operating System Of The Future. We know it will be the
operating system of the future because we've been told that for the last 20 years. Then
there's OS/2, with its graphical interface. And there's Windows NT. Where would I put my
money? Well, Unix keeps promising to take over the desktop, but its incoherent,
fragmented, and proprietary-enhancement-variant nature works against that goal. So you
will always be stuck with one hardware vendor. There's OS/2, a system developed by the
company that created the personal computer revolution then ran away from it, an operating
system doomed because they refuse to upgrade it beyond the '286 chip, and for which they
charge $3,000 per seat for the development kit. And there's Windows NT, created by a
company that understood the personal computer revolution, understands where it is going,
and has corporate plans to be there wherever it goes. Windows NT was designed to be
portable, runs on several different platforms already, and has a development kit that is
for all practical purposes free. I bet on Windows NT. You should, too."
joe
>>> There is a mythos about how fast the brain works. The brain
>>> is actually amazingly slow.
>>
>> Compared to what?
>
> ****
> Oh, say, a 4-function calculator. Go read Lindsay & Norman "Human Information
> Processing", the introductory book to cognitive psychology.
> ****
hmmm, you mean the calculator (any computer) is faster at reaching a
1x0 conclusion than a human?
Well, not really the same analogy, is it? Your calculator or Cray
isn't going to do much good at intelligence and putting together
unrelated consequences. Now I'm thinking Query Dissemination Theory,
i.e., where you are no longer calculating but *zooming* to a learned
solution, i.e. like entering 1x0 into your 4-func calculator once and
never having to do it again! In that vein, the calculator is a stupid
device, and slower (from typing and looking at the LED waiting for an
answer) at getting to the answer 0 than you are when you can do it with
no hands and your eyes closed, "almost" no thinking involved. Now, if
you wish to begin to emulate this behavior in the calculator, then give
it a short circuit for zero and other known conditions that will
eliminate flip-flopping bits and not really do any "processing" at all. :)
--
HLS
Second, forgive me for not giving you snippets, and forgive me for coming from
an Assembly/C low-level programming background...
To answer some of your questions:
1- No, I didn't create my thread suspended; from what I read on MSDN it didn't
look mandatory. I thought calling AfxEndThread(0, FALSE); was enough to keep
the object from being deleted automatically. Should I create the thread
suspended and change m_bAutoDelete to FALSE before running the thread
instead?
2- I used an auto-reset event because I wanted the event to be handled only
once. Using a manual-reset event could get this section of code executed
several times, since the threads run asynchronously.
Finally, before showing you snippets of what I was doing, I will explain what I
wanted to do. My main thread is used like a ping on a serial connection; the
ping is sent every 5 seconds. The second thread is used as a separate timer
(every minute) that notifies the main thread to execute an optional operation.
The callbacks were used to give an update of the communication status to the
dialog box. If we want to do this the right way, how should I send the updated
status to the dialog?
Here are the snippets from my utility class.
__________
bool CNWCManager::OpenConnection(int ComPort)
{
    ....
    m_hThreadEvent = CreateEvent(NULL, FALSE, FALSE, _T("KillUpdateLampThread"));
    if(m_hThreadEvent != NULL)
    {
        //Start Thread
        m_pUpdateThread = AfxBeginThread(UpdateThread, this);
        if(m_pUpdateThread != NULL)
        {
            if(m_IsMeteringEnabled == true)
            {
                EnableMetering(true);
            }
        }
        else
        {
            SetEvent(m_hThreadEvent);
            if(WaitForSingleObject(m_pUpdateThread->m_hThread, INFINITE) == WAIT_OBJECT_0)
            {
                delete m_pUpdateThread;
            }
        }
    }
bool CNWCManager::CloseConnection(void)
{
    bool bResult = false;
    //End Thread
    if(m_hThreadEvent != NULL)
    {
        SetEvent(m_hThreadEvent);
        if(WaitForSingleObject(m_pUpdateThread->m_hThread, INFINITE) == WAIT_OBJECT_0)
        {
            delete m_pUpdateThread;
            m_pUpdateThread = NULL;
        }
        if(m_hMeteringThreadEvent != NULL)
        {
            SetEvent(m_hMeteringThreadEvent);
            if(WaitForSingleObject(m_pMeteringThread->m_hThread, INFINITE) == WAIT_OBJECT_0)
            {
                delete m_pMeteringThread;
                m_pMeteringThread = NULL;
            }
        }
        bResult = true;
    }
    //Close COM port
    CloseHandle(m_hComm);
    return bResult;
}
UINT CNWCManager::UpdateThread(LPVOID pParam)
{
    CNWCManager* pObject = (CNWCManager*) pParam;
    HANDLE hEndThread = CreateEvent(NULL, FALSE, FALSE, _T("KillUpdateLampThread"));
    HANDLE hMeteringEvent = CreateEvent(NULL, FALSE, FALSE, _T("MeteringEvent"));
    HANDLE hEventArray[2];
    hEventArray[0] = hEndThread;
    hEventArray[1] = hMeteringEvent;
    DWORD EventStatus;
    bool ExitThread = false;
    do
    {
        EventStatus = WaitForMultipleObjects(2, hEventArray, FALSE, 5000);
        switch(EventStatus)
        {
        case WAIT_OBJECT_0:
            ExitThread = true;
            break;
        case WAIT_OBJECT_0 + 1:
            pObject->MeteringUpdate();
            break;
        case WAIT_TIMEOUT:
            pObject->SendUpdate();
            break;
        case WAIT_FAILED:
            ExitThread = false;
            break;
        default:
            ExitThread = false;
            break;
        }
    }
    while(ExitThread == false);
    CloseHandle(hEndThread);
    CloseHandle(hMeteringEvent);
    AfxEndThread(0, FALSE);
    return 0;
}
UINT CNWCManager::MeteringTimerThread (LPVOID pParam)
{
    CNWCManager* pObject = (CNWCManager*) pParam;
    bool ExitThread = false;
    DWORD EventStatus;
    HANDLE hEndThread = CreateEvent(NULL, FALSE, FALSE, _T("KillMeteringThread"));
    HANDLE hMeteringEvent = CreateEvent(NULL, FALSE, FALSE, _T("MeteringEvent"));
    do
    {
        EventStatus = WaitForSingleObject(hEndThread, 60000);
        switch(EventStatus)
        {
        case WAIT_OBJECT_0:
            ExitThread = true;
            break;
        case WAIT_TIMEOUT:
            SetEvent(hMeteringEvent);
            break;
        case WAIT_FAILED:
            ExitThread = false;
            break;
        default:
            ExitThread = false;
            break;
        }
    }
    while(ExitThread == false);
    CloseHandle(hEndThread);
    AfxEndThread(0, FALSE);
    return 0;
}
void CNWCManager::MeteringUpdate (void)
{
m_CommunicationMutex.Lock();
//Callback to dialog for updating the metering values
m_pCallback[1]->Execute(NULL);
m_CommunicationMutex.Unlock();
}
void CNWCManager::EnableMetering(bool bIsEnable)
{
    //Create Timer Thread for metering
    if(bIsEnable == true)
    {
        if(m_hMeteringThreadEvent == NULL)
        {
            m_hMeteringThreadEvent = CreateEvent(NULL, FALSE, FALSE, _T("KillMeteringThread"));
        }
        if(m_pMeteringThread == NULL)
        {
            m_pMeteringThread = AfxBeginThread(MeteringTimerThread, this);
        }
        m_IsMeteringEnabled = true;
    }
    else //Kill Thread
    {
        if(m_hMeteringThreadEvent != NULL && m_pMeteringThread != NULL)
        {
            SetEvent(m_hMeteringThreadEvent);
            if(WaitForSingleObject(m_pMeteringThread->m_hThread, INFINITE) == WAIT_OBJECT_0)
            {
                delete m_pMeteringThread;
                m_pMeteringThread = NULL;
            }
            CloseHandle(m_hMeteringThreadEvent);
            m_hMeteringThreadEvent = NULL;
            m_IsMeteringEnabled = false;
        }
    }
}
In the Threads debug window I have 3 threads: the main thread, CWnd::UpdateWindow()
and my MeteringTimerThread.
callstack for CWnd::UpdateWindow
ntdll.dll!76ef64f4()
[Frames below may be incorrect and/or missing, no symbols loaded for
ntdll.dll] user32.dll!77024341()
user32.dll!77022bfe()
> TLAC Demo.exe!CWnd::UpdateWindow() Line 142 + 0x39 bytes C++
TLAC Demo.exe!CFontStatic::RedrawFont() Line 259 C++
TLAC Demo.exe!CTLACDemoDlg::CallbackUpdateStatus(void * Param=0x015982b4)
Line 668 C++
TLAC Demo.exe!TCallback<CTLACDemoDlg>::Execute(void * Param=0x015982b4)
Line 30 + 0x1d bytes C++
TLAC Demo.exe!CNWCManager::SendUpdate() Line 399 + 0x1b bytes C++
TLAC Demo.exe!CNWCManager::UpdateThread(void * pParam=0x01598270) Line
210 C++
TLAC Demo.exe!_AfxThreadEntry(void * pParam=0x0012e654) Line 109 + 0xf
bytes C++
TLAC Demo.exe!_callthreadstartex() Line 348 + 0xf bytes C
TLAC Demo.exe!_threadstartex(void * ptd=0x0159a788) Line 331 C
kernel32.dll!76621194()
ntdll.dll!76f0b3f5()
ntdll.dll!76f0b3c8()
callstack for CNWCManager::MeteringTimerThread
[Frames below may be incorrect and/or missing, no symbols loaded for
ntdll.dll] ntdll.dll!76ef5e6c()
KernelBase.dll!750b179c()
kernel32.dll!7661f003()
kernel32.dll!7661efb2()
> TLAC Demo.exe!CNWCManager::MeteringTimerThread(void * pParam=0x01598270)
> Line 281 + 0x11 bytes C++
TLAC Demo.exe!_AfxThreadEntry(void * pParam=0x0012e688) Line 109 + 0xf
bytes C++
TLAC Demo.exe!_callthreadstartex() Line 348 + 0xf bytes C
TLAC Demo.exe!_threadstartex(void * ptd=0x01591cb0) Line 331 C
kernel32.dll!76621194()
ntdll.dll!76f0b3f5()
ntdll.dll!76f0b3c8()
______________
That is the essence of my class. The dialog calls OpenConnection and
CloseConnection to start the threads and terminate them. EnableMetering is
used to activate the second thread for the optional MeteringUpdate() call
in UpdateThread().
Thank you.
Despite the fact that I found your way of answering a bit harsh, I do thank
you for pointing me in a good direction to solve my problem.
Keaven
"Keaven Pineau" <keavenpineau...@videotron.ca-no-more-spam> wrote
in message news:e2v2KMCr...@TK2MSFTNGP05.phx.gbl...
>
>Joseph M. Newcomer wrote:
> >
>
>>>> There is a mythos about how fast the brain works. The brain
>
> >>> is actually amazingly slow.
> >>
>
>>> Compared to what?
>
> >
>
>> ****
>> Oh, say, a 4-function calculator. Go read Lindsay & Norman "Human Information
>> Processing", the introductory book to cognitive psychology.
>> ****
>
>
>hmmm, you mean the calculator (any computer) is faster at reaching a
>1x0 conclusion than a human?
****
Not sure what you mean, but if you mean "one multiplied by zero equals zero", of course. A
Pentium 4-class machine does this, in floating point, in about 350 picoseconds. That
isn't even close to neural propagation delays.
*****
>
>Well, not really the same analogy, is it? Your calculator or Cray
>isn't going to do much good at intelligence and putting together
>unrelated consequences. Now I'm thinking Query Dissemination Theory,
>i.e, where you no longer calculating but *zooming* to a learned
>solution. i.e. like entering 1x0 into your 4-func calculator once and
>never have to do it again! In that vain, the calculator is a stupid
>device and slower from typing and looking at the LED waiting for an
>answer at getting to the answer 0 when you can do with no hands and
>your eyes close, "almost" no thinking involved. Now, if you wish to
>begin to emulate this behavior in the calculator, then give it a short
>circuit for zero and other known conditions that will eliminate flip
>flopping bits and not really do any "processing" at all. :)
****
The thing that makes the brain interesting is the massive parallelism on information
retrieval, and the ability to have insights far beyond anything that mechanical emulation
would suggest given the neural delays. But in most cases, we think no faster than we
talk. This has been demonstrated many times. But we are *excellent* pattern recognizers.
Example: I was once handed, by a researcher, a 1-page FORTRAN program. It was placed
face-down on the table, and the back of the assignment described a bug in the program, in
the form "The program is expected to produce an answer X, and instead it produces an
answer Y". The goal was to measure how long it took to isolate the bug. The program was
about 40 lines of code.
So they said "Go", hit the stopwatch, and I turned it over. I found the bug in 35
seconds, and was sort of disappointed it had taken so long. It turns out the BEST anyone
had done before this was seven minutes!
Why? Because I had a debugging pattern in my head already. When I turned the page over,
I just scanned the code for the print statement, and worked backwards three statements to
the erroneous statement. All other subjects had been undergraduates, who felt they had to
start at the top of the program, read it line by line, and understand it before they could
fix the bug. I had a completely different pattern, and applied it (at that point, I had
been a programmer about 16 years). THIS is the power of the human brain. But, when I was
asked to redo it and speak aloud what I was doing, it took about 40 seconds for me to
repeat my reasoning aloud, comparable in time to my solution time. At that point, the
researchers had never considered that highly-experienced programmers had a different
paradigm than beginners. As research proved later, this is true in a variety of human
activities; the difference between a professional and an amateur can largely be
categorized as the richness of patterns based on experience.
We are among the very best pattern-matchers around. And if you look at much of AI
research, it is attempting to synthesize powerful pattern recognizers based on experience
to enrich that capability. Humans are born with this as part of their "base ROM" code!
joe
>First, I have to say I never though I would generate that kind of debate for
>asking a question...
>
>Second, forgive me to not give you snippets and forgive me to come from a
>Assembly/ C low level programming background...
>
>To answer some of your questions.
>
>1- No , I didn`t create my thread suspended from what I have read on MSDN it
>didn`t look mandatory, I thought calling AfxEndThread(0, FALSE); was enough
>to not delete the object automatically. Should I create the thread
>suspended and change m_bAutoDelete to FALSE before running the thread
>instead?
****
Yes. AfxEndThread terminates the thread. And if you look at the code, it deletes the
object. But it also leaves all of your other objects on the stack undestructed. Deadly
dangerous. Never use it.
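To make the point concrete, here is a rough sketch of that pattern (the names are
illustrative, not taken from the posted code):

    // Create the thread suspended so m_bAutoDelete can be cleared before it runs
    CWinThread * pThread = AfxBeginThread(UpdateThread, this,
                                          THREAD_PRIORITY_NORMAL, 0, CREATE_SUSPENDED);
    if(pThread != NULL)
    {
        pThread->m_bAutoDelete = FALSE;   // we own the object, so m_hThread stays valid
        pThread->ResumeThread();          // called exactly once, after the flag is set
    }

    UINT CNWCManager::UpdateThread(LPVOID pParam)
    {
        // ... wait loop here ...
        return 0;    // the ONLY way out: everything on the stack is destructed
    }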
****
>
>2- I used an auto-reset event because I wanted the event to be treated only
>once. Using a manual reset event could get this section of code done
>several times since I was using asynchronous threads.
****
Wrong approach. There are race conditions with auto-reset events. If you want something
done exactly once, an event is probably the wrong model. An asynchronous notification of
the thread responsible for the action is a better choice, for example, PostMessage to a
window of the main UI thread. There are a bunch of other approaches, all of them vastly
more reliable than an auto-reset event. In fact, I have never, in all my years of Windows
programming, come up with a really good reason to use an auto-reset event.
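For example, a sketch of the PostMessage approach (the message name, window handle, and
handler names are illustrative, not from the posted code):

    // In a shared header: a private notification message
    #define UWM_METERING_UPDATE (WM_APP + 1)

    // In the worker thread: no event, no lock; just post and keep going
    ::PostMessage(hWndNotify, UWM_METERING_UPDATE, 0, 0);

    // In the dialog's message map
    ON_MESSAGE(UWM_METERING_UPDATE, OnMeteringUpdate)

    // The handler runs in the UI thread, so it may touch the controls freely
    LRESULT CTLACDemoDlg::OnMeteringUpdate(WPARAM, LPARAM)
    {
        // update the metering display here
        return 0;
    }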
I'd respond more, but I have to rush off to a concert at this point. I will re-examine
this question tomorrow, when I have more time. I don't have time to read the code right
now (my alarm on my cell phone just went off, I'd lost track of time).
joe
*****
> [SNIP]
> As research proved later, this is true in a variety of human
> activities; the difference between a professional and an amateur can largely be
> categorized as the richness of patterns based on experience.
>
> [SNIP]
And the high ability to disseminate. Chemical engineers knew this for
a long time! We were the original systems and pattern recognition
coders! :)
--
HLS
****
This is inappropriate unless you plan to have every instance of the program using the
exact same event. Generally, you do NOT want to give a kernel synchronization object a
name; NULL makes it local to the process.
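That is, something like:

    m_hThreadEvent = CreateEvent(NULL, FALSE, FALSE, NULL);   // unnamed (NULL): local to this process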
****
> if(m_hThreadEvent != NULL)
****
Nesting adds complexity. The correct response is
if(m_hThreadEvent == NULL)
return false;
The myth of one exit point was designed to make programs hard to create and understand.
****
> {
>//Start Thread
>m_pUpdateThread = AfxBeginThread(UpdateThread,this);
>
>if(m_pUpdateThread != NULL)
>{
> if(m_IsMeteringEnabled == true)
>{
>EnableMetering(true);
>}
>}
>else
>{
>SetEvent(m_hThreadEvent);
>if(WaitForSingleObject(m_pUpdateThread->m_hThread,INFINITE) ==
>WAIT_OBJECT_0)
****
This is exceptionally poor code. There are several possible conditions, including what
happens if the handle is incorrect. Note that if the handle is incorrect, this doesn't
delete the thread object, but merely drops through.
>{
> delete m_pUpdateThread;
****
Wrong. If you created the thread without setting m_bAutoDelete to FALSE, it has already
been deleted, and this code is erroneous because it attempts to delete the object a second
time. This will cause an assertion failure in the storage allocator when it discovers you
have tried to delete an object twice.
****
>}
>}
>}
*****
When inserting code, make sure the nesting of braces is maintained; I have no idea what
close brace matches what open brace, and I stopped doing manual brace matching years ago.
Also, since this is a bool function, why is it not returning a value here?
****
>
>bool CNWCManager::CloseConnection(void)
>{
>bool bResult = false;
>
>//End Thread
>if(m_hThreadEvent != NULL)
>{
>SetEvent(m_hThreadEvent);
>
>if(WaitForSingleObject(m_pUpdateThread->m_hThread,INFINITE) ==
>WAIT_OBJECT_0)
>{
>delete m_pUpdateThread;
****
Same error as above. Apparently you are using this event as a termination event. You
can't delete the object twice.
****
>m_pUpdateThread = NULL;
>}
>
>if(m_hMeteringThreadEvent != NULL)
>{
>SetEvent(m_hMeteringThreadEvent);
****
There is no evidence of where this event is created. What is it, and how does it relate
to m_hThreadEvent?
*****
>
>if(WaitForSingleObject(m_pMeteringThread->m_hThread,INFINITE) ==
>WAIT_OBJECT_0)
>{
>delete m_pMeteringThread;
****
This is probably the same double-deletion error as before
****
>m_pMeteringThread = NULL;
>}
>}
>
>bResult = true;
>}
>
>//Close COM port
>CloseHandle(m_hComm);
>
>return bResult;
>}
>
>UINT CNWCManager::UpdateThread(LPVOID pParam)
>{
>CNWCManager* pObject = (CNWCManager*) pParam;
>HANDLE hEndThread = CreateEvent(NULL, FALSE, FALSE,
>_T("KillUpdateLampThread"));
****
Inappropriate, and probably incorrect, to have given this object a name. Also, why is
this object created inside the thread? There's something seriously wrong with this. You
have to assume the thread can terminate before it ever gets here, so nothing outside the
thread can use this handle. Since it is used for thread shutdown, it has to be
pre-created and exist before the thread is started, there's no way to assume that it is
valid otherwise.
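A sketch of the ownership being described (the member names are illustrative, not from the
posted code):

    // In OpenConnection, BEFORE AfxBeginThread: the manager owns the event
    m_hShutdownEvent = CreateEvent(NULL, TRUE, FALSE, NULL);   // manual-reset, unnamed
    if(m_hShutdownEvent == NULL)
        return false;
    m_pUpdateThread = AfxBeginThread(UpdateThread, this);

    // In the thread function: use the handle through the object; never create or close it here
    CNWCManager * pObject = (CNWCManager *)pParam;
    HANDLE hEvents[2] = { pObject->m_hShutdownEvent, pObject->m_hMeteringEvent };

    // In CloseConnection, AFTER the thread handle has been waited on
    CloseHandle(m_hShutdownEvent);
    m_hShutdownEvent = NULL;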
****
> HANDLE hMeteringEvent = CreateEvent(NULL, FALSE, FALSE,
>_T("MeteringEvent"));
****
Ditto. I note that you are attempting to do a SetEvent on this object handle without
knowing if it exists. There are serious problems here. If the outer thread touches the
handle, it must create it; that's the only way it can know this exists. You cannot rely
on "likely" scenarios for correctness, it must be correct under all possible conditions.
****
> HANDLE hEventArray[2];
> hEventArray[0] = hEndThread;
> hEventArray[1] = hMeteringEvent;
>DWORD EventStatus;
>bool ExitThread = false;
>
>do
>{
>EventStatus = WaitForMultipleObjects(2, hEventArray, FALSE, 5000);
>
>switch(EventStatus)
>{
>case WAIT_OBJECT_0:
>ExitThread = true;
>break;
>
>case WAIT_OBJECT_0 + 1:
>pObject->MeteringUpdate();
>break;
>
>case WAIT_TIMEOUT:
>pObject->SendUpdate();
>break;
>
>case WAIT_FAILED:
>ExitThread = false;
>break;
>default:
>ExitThread = false;
>break;
>}
****
This is, of course, the correct way to wait on an object. Cover all cases.
****
>}
>while(ExitThread == false);
****
It is silly to compare a boolean variable to a boolean literal so you can form a boolean
test. What part of "this is a boolean variable" did you miss? It stands for its own
value.
while(!ExitThread);
would be the correct way.
Note also that such a test is counterintuitive. I would have done
bool running = true;
and then I could have written
while(running)
{
... thread loop here
}
It is not at all clear why you use the do {loop} while(condition) construct instead of the
more obvious while() {loop} construct, which is easier to read and understand. I don't
have to go all the way down the page to find the trivial termination condition. It is
very rare to need do...while and always implies the loop must be done at least once. In
this case, if the thread-running condition is false, the loop should not be done at all.
The while() { loop } construct is far more comprehensible.
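Putting those comments together, the wait loop might read like this (a sketch; it assumes
the two events are members created by the owning code, as discussed above):

    bool running = true;
    while(running)
    {
        DWORD status = WaitForMultipleObjects(2, hEvents, FALSE, 5000);
        switch(status)
        {
        case WAIT_OBJECT_0:       // shutdown event
        case WAIT_FAILED:         // a bad handle is fatal: get out, don't spin
            running = false;
            break;
        case WAIT_OBJECT_0 + 1:   // metering event
            pObject->MeteringUpdate();
            break;
        case WAIT_TIMEOUT:
            pObject->SendUpdate();
            break;
        }
    }
    return 0;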
****
>
>CloseHandle(hEndThread);
> CloseHandle(hMeteringEvent);
****
The thread should only close handles it creates. It should not close handles it did not
create if anyone else can use them. In this case, because hEndThread can be used outside
the thread, it must be created before the thread starts and destroyed by the creator. It
is not clear what hMeteringEvent does, since it is created by some totally separate
thread, and you have NO IDEA of the sequencing of these threads.
****
>
>AfxEndThread(0, FALSE);
****
Always, and forever, a design error. The ONLY correct way to exit a thread is return from
the top-level thread function, period. Otherwise, you have NO IDEA what values might be
lost. So just lose the idea that there is any other way to terminate a thread.
****
>return 0;
>}
>
>UINT CNWCManager::MeteringTimerThread (LPVOID pParam)
>{
>CNWCManager* pObject = (CNWCManager*) pParam;
>bool ExitThread = false;
>DWORD EventStatus;
>HANDLE hEndThread = CreateEvent(NULL, FALSE, FALSE,
>_T("KillMeteringThread"));
> HANDLE hMeteringEvent = CreateEvent(NULL, FALSE, FALSE,
>_T("MeteringEvent"));
****
In both of the above lines, completely inappropriate to give this event a name unless you
expect it to be other than process-local. Nothing I have seen here suggests that it could
be other than process local.
You appear to be gathering data from a serial port. Think of the implications if the
machine has (as one of mine does) 50 serial ports and you want to run three copies of the
program talking to three different devices. Naming the events results in total
catastrophe.
****
>
>do
>{
>EventStatus = WaitForSingleObject(hEndThread, 60000);
>
>switch(EventStatus)
>{
>case WAIT_OBJECT_0:
>ExitThread = true;
>break;
>
>case WAIT_TIMEOUT:
> SetEvent(hMeteringEvent
> );
****
This is confusing. You have created an event called hMeteringEvent entirely within this
thread, and no other thread can use it. Yet you set it. I don't even follow how this
could possibly be useful.
*****
>break;
>
>case WAIT_FAILED:
>ExitThread = false;
****
See what I mean about counterintuitive? I would think that if there is a serious error,
you want to exit the thread. If so, you should set it true; note that it is ALREADY false
so setting it false again does nothing useful, so this assignment is at worst incorrect
and at best completely perplexing.
****
>break;
>default:
>ExitThread = false;
****
Ditto
****
>break;
>}
>}
>while(ExitThread == false);
****
See previous comment about comparison and the use of a counterintuitive name
****
>
>CloseHandle(hEndThread);
****
I don't understand the purpose of this handle. It is a local variable and cannot be set
by any outside thread, so what good does it do? Alternatively, if it is set by some other
thread, then this thread must not own it and therefore must not either create or delete
it.
*****
>
>AfxEndThread(0, FALSE);
****
See previous comment
****
>return 0;
>}
>
>void CNWCManager::MeteringUpdate (void)
>{
> m_CommunicationMutex.Lock();
> //Callback to dialog for updating the metering values
> m_pCallback[1]->Execute(NULL);
> m_CommunicationMutex.Unlock();
****
If you set a lock, you are already in deep trouble. You should not be doing locking like
this between threads, especially involving a callback, because you can deadlock,
trivially. It is the responsibility of the callback to handle any synchronization. But
in a well-designed system, no locks would be required because there would be no data
shared between threads. Since I have no idea at this point why you need a lock, or why
you even think data sharing is appropriate, I am deeply suspicious of the entire
structure.
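One sketch of what "no shared data" looks like in practice (the message, control, and
variable names are illustrative): instead of locking and calling back into the dialog,
copy the status into something the UI thread will own, and post it:

    // Worker thread: make a copy; ownership passes to the UI thread with the message
    CString * pStatus = new CString(strStatus);   // strStatus: whatever text the worker computed
    ::PostMessage(hWndNotify, UWM_STATUS_UPDATE, 0, (LPARAM)pStatus);

    // UI thread handler: take ownership, use it, delete it; no lock anywhere
    LRESULT CTLACDemoDlg::OnStatusUpdate(WPARAM, LPARAM lParam)
    {
        CString * pStatus = (CString *)lParam;
        m_StatusCtrl.SetWindowText(*pStatus);     // safe: this runs in the UI thread
        delete pStatus;
        return 0;
    }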
****
>}
>
>void CNWCManager::EnableMetering(bool bIsEnable)
>{
>//Create Timer Thread for metering
>if(bIsEnable == true)
****
Duh. If bIsEnable is true, you don't need to compare it to true to get a true result that
tells you it was already true!
if(bIsEnable)
****
>{
>if(m_hMeteringThreadEvent == NULL)
>{
>m_hMeteringThreadEvent = CreateEvent(NULL, FALSE, FALSE,
>_T("KillMeteringThread"));
>}
****
See previous remarks on naming a kernel object. Inappropriate.
At least here it appears you are creating the kill event in the thread that manages the
other thread, which is correct.
In addition, you have used confusing names, such as MeteringThreadEvent, in ways that seem
to conflict with other uses. I am getting completely confused by the bad names.
****
>
>if(m_pMeteringThread == NULL)
>{
>m_pMeteringThread = AfxBeginThread(MeteringTimerThread,this);
>}
****
There are so many race conditions here I can't even begin to explain them all. Ultimately,
you have a chance that the thread is dying but has not yet died and yet you are not
creating a new one. This code scares me.
****
>
>m_IsMeteringEnabled = true;
>}
>else //Kill Thread
>{
>if(m_hMeteringThreadEvent != NULL && m_pMeteringThread != NULL)
>{
>SetEvent(m_hMeteringThreadEvent);
>
>if(WaitForSingleObject(m_pMeteringThread->m_hThread,INFINITE) ==
>WAIT_OBJECT_0)
>{
>delete m_pMeteringThread;
>m_pMeteringThread = NULL;
>}
****
The above code finally broke me. I cannot reason about anything this convoluted and
complex. Too many similar names, too many odd conditions. Here's what you do:
Start the program
Create the threads, both of them, exactly once, period.
There is one common shutdown event between the two, created by the main thread and managed
by it and it alone.
If I can't reason about it, then it is far too complex to work.
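In outline (a sketch, not your code; the member names are illustrative):

    // OpenConnection: everything created exactly once, all unnamed, all owned here
    m_hShutdownEvent = CreateEvent(NULL, TRUE, FALSE, NULL);   // manual-reset: releases BOTH threads
    m_hMeteringEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
    m_pUpdateThread = AfxBeginThread(UpdateThread, this,
                                     THREAD_PRIORITY_NORMAL, 0, CREATE_SUSPENDED);
    m_pUpdateThread->m_bAutoDelete = FALSE;
    m_pUpdateThread->ResumeThread();
    // ... same pattern for m_pMeteringThread ...

    // CloseConnection: one SetEvent, wait for BOTH thread handles, then clean up
    SetEvent(m_hShutdownEvent);
    HANDLE hThreads[2] = { m_pUpdateThread->m_hThread, m_pMeteringThread->m_hThread };
    WaitForMultipleObjects(2, hThreads, TRUE, INFINITE);
    delete m_pUpdateThread;    m_pUpdateThread = NULL;
    delete m_pMeteringThread;  m_pMeteringThread = NULL;
    CloseHandle(m_hShutdownEvent);   m_hShutdownEvent = NULL;
    CloseHandle(m_hMeteringEvent);   m_hMeteringEvent = NULL;

Enabling or disabling metering then never creates or destroys a thread; it only controls
whether the metering event ever gets set.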
****
>
>CloseHandle(m_hMeteringThreadEvent);
>m_hMeteringThreadEvent = NULL;
>m_IsMeteringEnabled = false;
>}
>}
>}
>
>in the threads debug window I have 3 threads, the main, CWnd:UpdateWindow()
>and my MeteringTimerThread.
>
>callstack for CWnd::UpdateWindow
> ntdll.dll!76ef64f4()
> [Frames below may be incorrect and/or missing, no symbols loaded for
>ntdll.dll] user32.dll!77024341()
> user32.dll!77022bfe()
>> TLAC Demo.exe!CWnd::UpdateWindow() Line 142 + 0x39 bytes C++
> TLAC Demo.exe!CFontStatic::RedrawFont() Line 259 C++
****
Already I see problems here. You are apparently attempting to touch a window in your
callback. This is deeply wrong, and MUST NOT BE ALLOWED TO HAPPEN! You must NOT call
UpdateWindow from your thread. You must NEVER touch a window from a thread.
This is one of the problems with callbacks and how to reason about them. You treat it as
if it is running in the context of your main GUI thread. It is not, and therefore must
obey serious limitations. In particular, it may not call ANY method which uses an HWND,
*except* PostMessage (which doesn't touch the HWND, just the queue associated with the
HWND, a critical distinction). Remove ANY code from your callback that touches a window!
Only then can you even START to make sense of this mess.
****
> TLAC Demo.exe!CTLACDemoDlg::CallbackUpdateStatus(void * Param=0x015982b4)
>Line 668 C++
> TLAC Demo.exe!TCallback<CTLACDemoDlg>::Execute(void * Param=0x015982b4)
>Line 30 + 0x1d bytes C++
> TLAC Demo.exe!CNWCManager::SendUpdate() Line 399 + 0x1b bytes C++
> TLAC Demo.exe!CNWCManager::UpdateThread(void * pParam=0x01598270) Line
>210 C++
> TLAC Demo.exe!_AfxThreadEntry(void * pParam=0x0012e654) Line 109 + 0xf
>bytes C++
> TLAC Demo.exe!_callthreadstartex() Line 348 + 0xf bytes C
> TLAC Demo.exe!_threadstartex(void * ptd=0x0159a788) Line 331 C
> kernel32.dll!76621194()
> ntdll.dll!76f0b3f5()
> ntdll.dll!76f0b3c8()
>
>callstack for CNWCManager::MeteringTimerThread
****
Alas, since the symbols are missing, it is hard to tell what is going on here. But there
are so many deep flaws in this design that you have to redo it before there is any hope of
making sense of this. I still do not see why an autoreset event is required anywhere. But
the deadlock is typical of any case where a thread attempts to manipulate a window, and
that lock of the mutex before the callback is also very deadly.
In multithreaded programming, if you have to set locks, you have already lost. Locks
should be limited to incredibly low-level features where all the manipulation is contained
between the lock calls (no external calls) so you know the lock is a "leaf" in the locking
tree. You cannot reason well about locks, because nobody can reason well about locks
(myself included). So locks should be limited to something that, for example, adds a
value to a shared queue. The asynchronous agent paradigm is your best solution, and in
that you hardly ever have to consider locking anything (true, there must be locks, but
they are so low-level that reasoning about them is irrelevant).
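For instance, a "leaf" lock of the kind being described (the queue and its element type
are illustrative):

    // The lock protects only the queue; nothing else happens while it is held and no
    // call leaves the locked region, so it cannot participate in a deadlock cycle.
    void CNWCManager::QueueStatus(const CString & status)
    {
        m_QueueLock.Lock();              // CCriticalSection member (illustrative)
        m_StatusQueue.AddTail(status);   // CList<CString, CString&> member (illustrative)
        m_QueueLock.Unlock();
        // Notify the consumer outside the lock, e.g. by PostMessage.
    }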
I can tell you right now that if I got this piece of code (and was being paid to fix it)
the very first thing I would do is scrap it and rewrite it as something comprehensible,
something without any mutexes or other explicit locks (and this means you have to maintain
data integrity using other, better techniques, such as asynchronous agents), come up with
better factoring of the responsibility of owning events, eliminate any use of an
auto-reset event for any reason whatsoever (if there is a reason, it means the design is
probably wrong), clean up the names, establish proper scope of variables, and overall,
nothing of this current thread-and-synchronization structure would remain.
Don't write code that is hard to understand. You'll probably get it wrong. You probably
did. Part of my survival mechanism in a complex multithreaded world is to never write
complex code; I limit myself to simple solutions that work the first time, and cannot be
made to fail. Your code is riddled with race conditions that are, frankly, scary.
****
> [Frames below may be incorrect and/or missing, no symbols loaded for
>ntdll.dll] ntdll.dll!76ef5e6c()
> KernelBase.dll!750b179c()
> kernel32.dll!7661f003()
> kernel32.dll!7661efb2()
>> TLAC Demo.exe!CNWCManager::MeteringTimerThread(void * pParam=0x01598270)
>> Line 281 + 0x11 bytes C++
> TLAC Demo.exe!_AfxThreadEntry(void * pParam=0x0012e688) Line 109 + 0xf
>bytes C++
> TLAC Demo.exe!_callthreadstartex() Line 348 + 0xf bytes C
> TLAC Demo.exe!_threadstartex(void * ptd=0x01591cb0) Line 331 C
> kernel32.dll!76621194()
> ntdll.dll!76f0b3f5()
> ntdll.dll!76f0b3c8()
>
>______________
>
>That is the essential of my class. The dialog calls openconnection and
>closeconnection to start the threads and terminate them. EnableMetering is
>used to activate the second thread for the optional MeteringUpdate () call
>in UpdateThread();
****
Create all the threads. If you want metering, you enable it by doing a SetEvent of the
metering event. But this business about distributed thread creation is hard to
understand. Assume you ALWAYS want metering, and the WFMO waits on two events. If you
never enable metering, the thread just sits there.
This whole thing is far too complex to understand. There are too many scoping issues,
race conditions, object ownership issues, and ultimately, the fact that you ever, under
any conditions, ask a thread to touch a window. Any one of these failures is deadly to
comprehension, and you have all of them; the combination results in code that is,
essentially, incomprehensible. It is nearly impossible to reason about the timing and
failure modes.
It is not clear what the metering thread actually does that can't be handled by a
WM_TIMER timer (SetTimer) in the main thread simply doing a SetEvent. Alternatively, it
might be doable by timeSetEvent (whose callback is always in a separate thread) without
all this complex overhead of events.
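For example, the metering thread could disappear entirely and be replaced by something
like this in the dialog (the timer ID and member names are illustrative):

    #define IDT_METERING 1    // any nonzero timer id

    // When metering is enabled:
    SetTimer(IDT_METERING, 60000, NULL);      // 60-second WM_TIMER

    // ON_WM_TIMER() handler:
    void CTLACDemoDlg::OnTimer(UINT_PTR nIDEvent)
    {
        if(nIDEvent == IDT_METERING)
            SetEvent(m_pManager->m_hMeteringEvent);   // wake the update thread
        CDialog::OnTimer(nIDEvent);
    }

    // When metering is disabled:
    KillTimer(IDT_METERING);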
Sometimes I accuse people of taking simple problems and making them complex. You have
taken a problem which is not exactly simple, but rendered it not only complex, but
incomprehensible. If you have a complex problem, you must work hard to make it as simple
as it can possibly be made, and the above code is far from that goal. So while I think
your problem is not trivial, it isn't nearly as complex as the code you've written makes
it.
joe
*****
>After reading your suggestions, I have changed the way I communicate between
>the worker thread and the UI. I now use messages via PostMessage(), which solved
>almost all my issues. The only thing left was to be careful not to call
>SuspendThread() several times without calling ResumeThread() each time, because
>the suspend count will not be at zero when I try to stop the thread, causing a
>deadlock on the WaitForSingleObject() on my thread handle.
****
If you call SuspendThread for any reason whatsoever, your design is broken beyond
recovery. NEVER, EVER use SuspendThread. And the only place you use ResumeThread is
after you create a thread with the CREATE_SUSPENDED flag. You call it exactly once, for
the lifetime of the thread.
Trust me, you are in very, very, VERY deep trouble if you ever call SuspendThread; you
just haven't hit the ultimately fatal set of conditions you will eventually hit. Whatever
it takes, you MUST remove that call from your program!
****
>
>Despite the fact that I found out your way to answer a bit harsh , I will
>thank you to point me in a good direction to solve my problem.
****
For me, I gave a polite answer. See my previous answer. The code presented is a mess, to
put it mildly.
joe
****
If you think "multiple inheritance" is bad in C++, think about it in the real world!
(a) in some cases you don't know which of several classes might be the father
(b) you have no idea if the classes compose properly and there is no syntax
checker!
(c) the methods and variables combine randomly
(d) for years, the new class is ill-behaved
(e) in later years, memory leaks are common
(f) in the first couple years, leaks are also a problem
Modern technology has solved some of these, for example
(a) DNA tests
(b) DNA genetic counseling
(c) [still leading-edge research on lower life forms] genetic engineering
(d) [no solution, but eliminating ages 0-2 and 13-19 has been suggested]
(e) Drugs are now available to help with this
(f) Disposable diapers
joe
Joseph M. Newcomer wrote:
--
HLS