I think what was said was that designing a class with HUNDREDS of
getters/setters is flawed. Having, say, FOUR getters and setters is
common. I don't think you could write something like Qt without having
some getters and setters somewhere. But you're talking about a hundred
or more. That's just bad design.
> In another words, my answer is no and design is not flawed.
Whatever your reasoning for this is, it doesn't matter. No one will
use a class with a hundred setters.
> You can declare and define 100 member functions in private.
This is a frightening proposal right here. As others have pointed out
to you already, this is a good indicator that you have a class which
does more than one thing. "Best C++ practice" would be to have a class
encapsulate _one_ interface.
> What happen if you change data member’s variable name?
I hit CTRL+R and do a find and replace. Two minutes' time, at most.
> You have to change it to all 100 member functions.
You suggest (just above) that data member names are subject to change,
but why do you think the get/set function name will not change? If you
change the getter/setter name then you have the same problem. i.e.,
your "waste of time" can happen in both designs. Using getters/setters
does not save you from this problem.
> Black chip is like CPU. You define getReadWrite(), getAddressBus(),
> getDataBus(), setDataBus(), and run() in public on interface.
> The client needs to use all member functions in interface.
What? Why? From what I understand you propose that there are several
objects: instances of CPU, DataBus, AddressBus, etc. Together these
might form a Computer. (Note that DataBus and AddressBus are not part
of the CPU.) Then the Computer should be the interface. Internally,
the Computer can use your setDataBus() member function to connect the
different components. But why would a client want this functionality?
For example, I am interacting with my computer right now. I'm not
doing it through the CPU, or the bus. I'm using the keyboard. If I
were to go to a different computer, with different CPU and bus, I
would still want to use the keyboard. Give your client a "keyboard".
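To sketch that idea in code (a minimal illustration only; `Computer`, `pressKey`, and the rest are invented names, not anything proposed in this thread): the components are wired together internally, and the client sees only the "keyboard".

```cpp
#include <string>

// Illustrative internal components -- not part of the public interface.
class CPU { /* ... */ };
class DataBus { /* ... */ };

// The Computer is the interface the client sees.  It connects the
// components internally; the client never touches the bus directly.
class Computer {
public:
    // The "keyboard": the only way a client feeds input in.
    void pressKey(char c) { input_ += c; }
    std::string typedSoFar() const { return input_; }
private:
    CPU cpu_;           // connected internally, e.g. via a setDataBus()
    DataBus bus_;       // never exposed to the client
    std::string input_;
};
```

Swap in a different CPU or bus and the client code is unaffected, which is the point of the keyboard analogy.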
>You are going to agree that CPU class is a good design.
No. See above. At the very _least_ it is questionable.
--Jonathan
I wrote a post on 23 April discussing this, but didn't get any reply
(either from you or from anyone else). I don't know if you have me
blocked or whether you just didn't feel like commenting. Perhaps you
could reply just to confirm you are reading my stuff? Otherwise it's a
bit of a waste me writing it...
Searching for "setmon" in the group (e.g. in Google) brings up my
earlier post.
Regards.
Paul.
> My question remains unanswered. You can declare and define 100
> member functions in private. You should always put getters / setters
> in each member function out of 100 member functions. What happen if
> you change data member's variable name? You have to change it to all
> 100 member functions. Modification wastes your time.
The good thing about getters and setters is that you define an
interface. The name of the member variables that the getters and
setters access can change as long as the clients of the class
use the getters and setters. This allows true data hiding; the
getters and setters may not use member variables at all!
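To illustrate that last point with a small sketch (the class and names are invented for the example): the "getter" below has no backing variable at all, and the stored members could be renamed without touching any client.

```cpp
// Clients use getArea().  Whether the area is a cached member variable
// or computed on the fly is invisible to them -- that is the data hiding.
class Rectangle {
public:
    Rectangle(double w, double h) : width_(w), height_(h) {}
    void setWidth(double w)  { width_ = w; }
    void setHeight(double h) { height_ = h; }
    // "Getter" with no backing variable: the value is computed.
    double getArea() const { return width_ * height_; }
private:
    double width_;   // could be renamed without breaking any client
    double height_;
};
```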
> You should not use data members directly from 100 member functions.
> You always access data members indirectly through getters / setters'
> member function.
I have yet to encounter an application that has 100 member functions
for accessing member variables. I choose to break up things into
smaller, more manageable pieces; usually by themes.
> Let me give you an example like you described engine in earlier
> posts. Think of black chip. Black chip is like CPU. You define
> getReadWrite(), getAddressBus(), getDataBus(), setDataBus(), and run()
> in public on interface.
What is black chip?
I would never design a CPU emulator in that fashion. CPUs are a bit
more complicated.
[Snip -- description of CPU class design]
> Can you please state your opinion for the best C++ practice? Should
> you write direct data members to all 100 member functions before you
> test all of them to be 100% successful? Or should you write indirect
> data members through getters / setters to all 100 member functions and
> they are defined in private?
My best practice is to start small, get it working, then build upon
the working base. Search the web for "Test Driven Development".
My current best practice is to invest in an interface class and
implementation classes. The interface class allows for different
implementations.
--
Thomas Matthews
C++ newsgroup welcome message:
http://www.slack.net/~shiva/welcome.txt
C++ Faq: http://www.parashift.com/c++-faq-lite
C Faq: http://www.eskimo.com/~scs/c-faq/top.html
alt.comp.lang.learn.c-c++ faq:
http://www.comeaucomputing.com/learn/faq/
Other sites:
http://www.josuttis.com -- C++ STL Library book
http://www.sgi.com/tech/stl -- Standard Template Library
> We discussed proxy class with getters / setters in earlier posts.
> You said design is flawed if getters / setters are defined in public.
> In another words, my answer is no and design is not flawed.
That depends on the class. The term "class" in C++ means many different
things depending on the situation, and whether or not getters and
setters are appropriate depends on what the class in question is for.
For example:
Class as data bucket. These sorts of classes generally have no
intra-member-variable invariants. For these sorts of classes,
getters/setters are a waste of time and resources. An example of such a
class would be std::pair.
Class as namespace. Classes of this sort are characterized by having
several types defined within them and rarely have member-variables. An
example of such a class would be std::unary_function.
Class as server. These classes generally perform specific tasks at the
behest of their client. They are characterized by a plethora of
member-functions that can only be called when the object is in a
specific state and their functions change the state of the object in
very specific ways. They also often have public invariants. For these
classes, getters and setters are very important. An example would be
std::vector.
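As a hedged illustration of that kind of public invariant (the class and names below are invented for the example, not drawn from the standard library): in a "server" class, a setter exists precisely to keep the invariant true, not merely to copy a value.

```cpp
#include <stdexcept>

// A toy "server" class with a public invariant: 0 <= used() <= capacity().
class Buffer {
public:
    explicit Buffer(int capacity) : capacity_(capacity), used_(0) {}
    int capacity() const { return capacity_; }
    int used() const { return used_; }
    // The setter enforces the invariant; direct member access could not.
    void setUsed(int n) {
        if (n < 0 || n > capacity_)
            throw std::out_of_range("Buffer invariant violated");
        used_ = n;
    }
private:
    int capacity_;
    int used_;
};
```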
Class as object. These sorts of classes usually encapsulate some sort of
state machine, they have maximum flexibility in how to handle their
internal data. These kinds of classes generally don't have any setters
at all (i.e., there are few, if any, functions that have stated
post-conditions that the caller can rely on.) For these kinds of
classes, setters are generally a bad idea, and getters are primarily in
the class to facilitate testing only. I don't know of any examples of
this sort of class in the standard library, but they are quite common in
GUI libraries.
You can, of course, have classes that are some mix of two or more of the
above archetypes, but in those cases it is likely that you are trying to
do too much with the class in question and should probably break it up.
Lastly, there may be other archetypes not listed above. These are simply
the ones that came to me in the course of writing this message.
[ ... ]
> You should not use data members directly from 100 member functions.
> You always access data members indirectly through getters / setters'
> member function.
Maybe -- but then again, maybe not. Others have already pointed out
that a class with hundreds of member functions is generally a poor
design choice from the beginning. Since you seem to have largely
ignored what they've said, I'm going to start from the example you
gave (a CPU emulator) and try to give a general sketch of one type of
design that avoids most of what you've described.
> Let me give you an example like you described engine in earlier
> posts. Think of black chip. Black chip is like CPU. You define
> getReadWrite(), getAddressBus(), getDataBus(), setDataBus(), and run()
> in public on interface.
As it happens, I've written a couple of CPU emulators and simulators
over the years. The only member function you've given above that I
can see any possibility of really including is 'run()'.
I've never written or seen a CPU emulator with hundreds of public
member functions or anywhere close to it -- a half dozen or so is
more like it. A few use a private member function for each
instruction, and have a decoder that looks at the instruction and
uses those member functions to execute the instructions. When I was
first learning (about) C++, that sounded reasonable, but I've since
decided it's usually less than optimal.
IMO, a better design has two class hierarchies: one for instructions
and one for operands. The instruction classes are all functors that
derive from a base "Instruction" class that just defines the basic
functor interface:
   struct Instruction {
       virtual void operator()(CPU_state &regs) = 0;
       virtual ~Instruction() {}
   };
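Filling that sketch in a little (this `CPU_state` is a deliberately minimal stand-in with two invented registers; a real state class would be richer): a concrete instruction is just a functor deriving from the base.

```cpp
#include <cstdint>

// Minimal stand-in for the CPU state an instruction operates on.
struct CPU_state {
    std::uint8_t a = 0;   // accumulator
    std::uint8_t x = 0;   // index register
};

// The base functor interface described above, made virtual so that
// derived instruction classes can override it.
struct Instruction {
    virtual void operator()(CPU_state &regs) = 0;
    virtual ~Instruction() {}
};

// One concrete instruction: add X into the accumulator.
struct AddX : Instruction {
    void operator()(CPU_state &regs) override {
        regs.a = static_cast<std::uint8_t>(regs.a + regs.x);
    }
};
```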
The class for operands basically deals with fetching operands for the
instructions, based on the addressing mode. It's also pretty trivial,
but varies based on the type of addressing supported by the CPU --
for example, at least for most instructions, a RISC CPU only supports
working with registers, but a CISC will allow working with operands
that are fetched from memory.
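A hedged sketch of what such an operand hierarchy might look like (everything here -- `Operand`, `Immediate`, `Absolute` -- is an invented name for illustration): each addressing mode knows how to fetch its own value.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Operands know how to fetch their value given the machine's memory.
struct Operand {
    virtual std::uint8_t fetch(const std::vector<std::uint8_t> &mem) const = 0;
    virtual ~Operand() {}
};

// Immediate addressing: the value is the operand itself.  A mostly
// register-based RISC design would not need much beyond this.
struct Immediate : Operand {
    explicit Immediate(std::uint8_t v) : value(v) {}
    std::uint8_t fetch(const std::vector<std::uint8_t> &) const override {
        return value;
    }
    std::uint8_t value;
};

// Absolute addressing: the operand is fetched from memory (the CISC case).
struct Absolute : Operand {
    explicit Absolute(std::size_t addr) : address(addr) {}
    std::uint8_t fetch(const std::vector<std::uint8_t> &mem) const override {
        return mem[address];
    }
    std::size_t address;
};
```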
As implied in the definition of the instruction class above, the CPU
state is separate from the CPU itself. The CPU decodes instructions
and executes them by passing a CPU state object to the implementation
of an individual instruction. The instruction will typically use an
operand object to fetch operands. Side effects of the instruction
will be recorded into the CPU state.
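The decode-and-dispatch step might then look something like this sketch (the opcode numbers, the lambda-based table, and all names are assumptions made for the example):

```cpp
#include <cstdint>
#include <functional>
#include <map>

// Minimal state, plus a CPU that only decodes and dispatches; the work
// itself happens in the instruction implementations.
struct CpuState { std::uint8_t a = 0; };

class Cpu {
public:
    Cpu() {
        // Opcode table mapping opcodes to instruction functors.
        table_[0x01] = [](CpuState &s) { ++s.a; };    // INC A
        table_[0x02] = [](CpuState &s) { s.a = 0; };  // CLR A
    }
    void execute(std::uint8_t opcode, CpuState &state) {
        table_.at(opcode)(state);   // throws on an unknown opcode
    }
private:
    std::map<std::uint8_t, std::function<void(CpuState &)>> table_;
};
```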
In my early designs, the CPU state was basically just dumb data. In
my newer designs, however, I've made it more intelligent. In
particular, I have a small hierarchy of register objects that support
breaks on reading, writing, writing a specific value, etc.
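A hedged sketch of such a register object (the hook mechanism and names are invented; a real hierarchy would also support breaks on reads and on specific values): the debugger installs a callback that fires on every write.

```cpp
#include <cstdint>
#include <functional>
#include <utility>

// A register that can trigger a callback ("breakpoint") on writes.
class Register {
public:
    std::uint8_t read() const { return value_; }
    void write(std::uint8_t v) {
        value_ = v;
        if (on_write_) on_write_(v);   // debugger hook fires here
    }
    void breakOnWrite(std::function<void(std::uint8_t)> hook) {
        on_write_ = std::move(hook);
    }
private:
    std::uint8_t value_ = 0;
    std::function<void(std::uint8_t)> on_write_;
};
```

Modelling memory as a vector of these, as described below, gives the same breakpoints on every memory cell for free.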
FWIW, even though it's probably not the most efficient design in the
world, in quite a few cases I define the CPU's memory as a vector of
registers as well, giving exactly the same kinds of breakpoints there
essentially for free. Most of the emulators I've written have been
for 8-bit CPUs though -- if I had to support lots of memory, this
might be a lot less practical.
The debugger is also a separate class. Again, it receives a CPU state
that it can examine and possibly manipulate. In most cases it's
entirely possible for it to manipulate the state so that (for
example) the client code running on the emulated CPU will "crash" --
e.g. go into an endless loop. Given that exactly the same can be done
with a normal debugger running on a hardware CPU, this is neither
surprising nor something to be prevented. Generally speaking, the
opposite is true: if something can happen on the real CPU, the
emulator should support the same thing.
These classes have around a half dozen member functions, not hundreds
or even dozens. Quite a few of them only have one or two.
--
Later,
Jerry.