Catalog Service


Luke Kanies

Sep 15, 2010, 7:26:55 PM
to puppe...@googlegroups.com
Hi all,

I've just stuck my proposal for a Catalog Service (which we've been bandying about internally for a while, and which I've been thinking about even longer) on the wiki:

http://projects.puppetlabs.com/projects/puppet/wiki/CatalogServiceArchitecture

Comments appreciated, and ideas for how you would use it appreciated even more.

--
My favorite was a professor at a University I Used To Be Associated
With who claimed that our requirement of a non-alphabetic character in
our passwords was an abridgement of his freedom of speech.
-- Jacob Haller
---------------------------------------------------------------------
Luke Kanies -|- http://puppetlabs.com -|- +1(615)594-8199


David Schmitt

Sep 17, 2010, 4:41:14 AM
to puppe...@googlegroups.com
On 9/16/2010 1:26 AM, Luke Kanies wrote:
> Hi all,
>
> I've just stuck my proposal for a Catalog Service (which we've been
> bandying about internally for a while, and which I've been thinking
> about even longer) on the wiki:
>
> http://projects.puppetlabs.com/projects/puppet/wiki/CatalogServiceArchitecture

Interesting read :-) Here are a few notes:

* document needs a list of proposed functional changes; afaict:
  * insert a RESTful API between puppetmaster and Catalog storage,
    thereby exposing a proper interface
  * decouple compilation and catalog serving completely
    * btw, using futures, one could compile a "template" catalog
      and only insert the changing fact values quickly?
  * enrich the search API to cover all resources and complex queries
  * implement additional backends
    * simple, no external dependencies
    * massively scalable, using some nosql solution

* I'm wondering how the flat-file-based backend will perform in the face
of 100 systems. My intuition says that traditional SQL storage will
remain a viable (performance vs. configuration) solution in this space.

* re directly exposing the back-end interface: that's only an artifact
of a badly designed API. If this really becomes a problem, perhaps
building a more complex query, e.g. looking for multiple resource
types, might be a viable way to avoid strong coupling to the backend

* I'm reminded of a trick I used in the early days to emulate a Catalog
query in the main scope:

case $hostname {
  'monitoring': {
    # apply monitoring stuff
  }
  'webserver': {
    # install webserver
  }
}

Today it looks like an awful hack, but the underlying principle might
prove interesting, even if only to strengthen the case of Catalog
storage by discarding it.

To contrast this with a modern implementation:

class monitoring {
  Monitoring::Service <<||>>
}

define monitoring::service::http() {
  @@monitoring::service { "http_${fqdn}_${name}":
    command => "check_http",
    target  => $fqdn,
    args    => $port,
  }
}

class webserver {
  $port = '80'  # visible inside the define via dynamic scoping
  monitoring::service::http { $port: }
}
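
For completeness, the above assumes a monitoring::service define on
the collecting side; a minimal sketch (the nagios path and config
format here are hypothetical) could be:

define monitoring::service($command, $target, $args) {
  # write one monitoring config fragment per collected service
  file { "/etc/nagios/conf.d/${name}.cfg":
    ensure  => present,
    content => "define service {\n  host_name ${target}\n  check_command ${command}!${args}\n}\n",
  }
}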

The main difference between the two solutions is the dataflow. In the
first solution, different resources are created from the same
configuration, depending on the environment. In the latter version,
compiling one manifest alters the environment for the other nodes.

Suddenly that sounds so wrong :) If all facts/nodes are available on the
server, shouldn't the puppetmaster be able to compile all Catalogs in
one step? Is the next manifest legal? Discuss!

node A { y { 1: } }
node B { x { 1: } }

define y() {
  $next = $name + 1
  @@x { $next: }
  Y <<||>>
}

define x() {
  $next = $name + 1
  @@y { $next: }
  X <<||>>
}

If I'm not completely off, this will create lots and lots of resources
as A and B are evaluated alternately.

The last part might be a little bit off-topic, but I think it does
pertain to the whole "all-nodes-are-part-of-the-system" thinking that is
the motivation for Catalog storage/queries.


Best Regards, David
--
dasz.at OG Tel: +43 (0)664 2602670 Web: http://dasz.at
Klosterneuburg UID: ATU64260999

FB-Nr.: FN 309285 g FB-Gericht: LG Korneuburg

Brice Figureau

Sep 17, 2010, 6:08:23 AM
to puppe...@googlegroups.com
On Wed, 2010-09-15 at 16:26 -0700, Luke Kanies wrote:
> Hi all,
>
> I've just stuck my proposal for a Catalog Service (which we've been
> bandying about internally for a while, and which I've been thinking
> about even longer) on the wiki:
>
> http://projects.puppetlabs.com/projects/puppet/wiki/CatalogServiceArchitecture
>
> Comments appreciated, and ideas for how you would use it appreciated
> even more.

I need to read it again, but the envisioned change looks really
interesting. I like this move :)

There's something interesting about node dependencies. I understand how
all the nodes' catalogs form a graph, with one node's catalog being a
subgraph. However, is there a plan to extend the puppet/ruby DSL to
express/enforce such dependencies?
I'm also wondering what would happen when serving a catalog for node A
which requires some resources from node B, which hasn't yet been
compiled. Compiling node B might not be an option because we need that
node's facts (BTW, how do the facts fit in this system?).

It would be great if your document could more clearly explain the
various possible architectures (i.e. combined master/catalog service,
multiple masters to one catalog service, decentralized catalog services
if possible...). Also, you seem to imply that the master and the
catalog service must run as separate processes (even on the same
host), which I thought wasn't necessary (i.e. indirecting directly to
the terminus instead of :rest).

Regarding queries:
I'm not sure I like the second proposed query system, where you separate
tags and parameters. I prefer the current system, where parameters are in
the "top namespace" like tags.
I remember we also support ORs, which are not in your proposal. I
personally don't use any ORs, and I can understand how they can make
everything complex...
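
To illustrate the current system, reusing the Monitoring::Service
example from earlier in the thread (illustrative only): tags and
parameters can already be matched in one collector expression, and
ORs work there too:

Monitoring::Service <<| tag == 'production' and target == 'web01.example.com' |>>
Monitoring::Service <<| command == 'check_http' or command == 'check_https' |>>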

Regarding back-ends:
I have lots of doubts about the performance of the text-file solution,
though.

One of the nice things about using an RDBMS is that you can swap one
for another quite independently. Once we choose a specific NoSQL or
graph database, I'm afraid we'll have to live with it for a long time.
I'm not talking about the implementation, but the actual use of the
datastore, which might prove painful if for some reason we find it
doesn't work as advertised and we need to switch to the new kid on the
block.

If possible, I think your document should make it clearer that
back-ends will be plugins (i.e. abstracted), so that the user can
choose between RDBMS and NoSQL (provided those plugins exist, of course).

And finally, you're talking about a migration tool. I don't really see
how it will be possible to migrate from the current storeconfigs
back-end to a newer system, since you told us that one of the issues
with storeconfigs was that it lacked some necessary information. The
only migration path I can see is: wait until all your nodes have asked
for a catalog, then switch off storeconfigs :)
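
Illustration only, but the switch-off itself would presumably just be
the existing setting:

# puppet.conf on the master, once every node has received a catalog
# from the new back-end
[master]
    storeconfigs = false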

About storing all the revisions of a given catalog: one of the issues
is that the service needs an option to retrieve one specific version,
or the list of all versions. This is easy to do with CouchDB because
that's how it works, but other NoSQL solutions might not provide an
easy way to do it. Of course, this is an interesting and powerful
feature for auditing changes (especially if there is a corresponding
audit tool).

Oh and I learnt a new word: paucity. Not sure I'll be able to use it in
conversations, but thanks anyway :)
--
Brice Figureau
Follow the latest Puppet Community evolutions on www.planetpuppet.org!

Luke Kanies

Sep 17, 2010, 5:03:29 PM
to puppe...@googlegroups.com
On Sep 17, 2010, at 4:41 AM, David Schmitt wrote:

> On 9/16/2010 1:26 AM, Luke Kanies wrote:
>> Hi all,
>>
>> I've just stuck my proposal for a Catalog Service (which we've been
>> bandying about internally for a while, and which I've been thinking
>> about even longer) on the wiki:
>>
>> http://projects.puppetlabs.com/projects/puppet/wiki/CatalogServiceArchitecture
>
> Interesting read :-) Here are a few notes:

I'll respond to the notes as necessary and update the document (probably on this next flight) as appropriate, but separately.

I misread your notes at first - this first section is really about being very clear as to the steps necessary to create this, right? I.e., it's an explicit description of the work necessary to implement the document's goals?

> * document needs a list of proposed functional changes; afaict:
>   * insert a RESTful API between puppetmaster and Catalog storage,
>     thereby exposing a proper interface
>   * decouple compilation and catalog serving completely
>     * btw, using futures, one could compile a "template" catalog
>       and only insert the changing fact values quickly?

This could be done, but I doubt that futures (which are a function in the parser) would be the mechanism. I certainly wouldn't want to tie this to futures, though.

> * enrich the search API to cover all resources and complex queries
> * implement additional backends
>   * simple, no external dependencies
>   * massively scalable, using some nosql solution
>
> * I'm wondering how the flat-file-based backend will perform in the face of 100 systems. My intuition says that traditional SQL storage will remain a viable (performance vs. configuration) solution in this space.

I expect a file back end to perform poorly with 100 systems - I think 30 is a reasonable amount. I agree that SQL will continue to be viable, and quite possibly a better long-term direction, at least for the next few years.

> * re directly exposing the back-end interface: that's only an artifact of a badly designed API. If this really becomes a problem, perhaps building a more complex query, e.g. looking for multiple resource types, might be a viable way to avoid strong coupling to the backend
>
> * I'm reminded of a trick I used in the early days to emulate a Catalog query in the main scope:
>
> case $hostname {
>   'monitoring': {
>     # apply monitoring stuff
>   }
>   'webserver': {
>     # install webserver
>   }
> }
>
> Today it looks like an awful hack, but the underlying principle might prove interesting, even if only to strengthen the case of Catalog storage by discarding it.

Yep, I did something very similar with Cfengine in 2003, and that work is in large part what drove me to write exported resources into Puppet. It works just as well in Puppet as it did in Cfengine, though, and in some ways it's superior. Note that I would tend to branch this by class membership rather than hostname.

In particular, it gives you the option of having an application-stack view; i.e., you can effectively say that a host is both a member of a given application stack and also performs the database function, and from there Puppet can use conditionals to figure out all of the details. That's not always as visible using exported resources, although of course there are other benefits.
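
A hedged sketch of that branching, with hypothetical role names and classes ($role would come from e.g. an external node classifier):

case $role {
  'appstack-db': {
    include appstack
    include appstack::database
  }
  'appstack-web': {
    include appstack
    include appstack::web
  }
}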

> To contrast this with a modern implementation:
>
> class monitoring {
>   Monitoring::Service <<||>>
> }
>
> define monitoring::service::http() {
>   @@monitoring::service { "http_${fqdn}_${name}":
>     command => "check_http",
>     target  => $fqdn,
>     args    => $port,
>   }
> }
>
> class webserver {
>   $port = '80'  # visible inside the define via dynamic scoping
>   monitoring::service::http { $port: }
> }
>
> The main difference between the two solutions is the dataflow. In the first solution, different resources are created from the same configuration, depending on the environment. In the latter version, compiling one manifest alters the environment for the other nodes.
>
> Suddenly that sounds so wrong :) If all facts/nodes are available on the server, shouldn't the puppetmaster be able to compile all Catalogs in one step? Is the next manifest legal? Discuss!

Yes, it should. Well, one step might be a stretch, but yeah, it should. I envision a catalog service dishing catalogs to clients, and a pool of compiler processes that pull compile requests off of a queue and compile as necessary. The compile requests can be created by the client -- which would be a normal model -- or by the dashboard, or as part of a commit hook in git, or whatever you want.

> node A { y { 1: } }
> node B { x { 1: } }
>
> define y() {
>   $next = $name + 1
>   @@x { $next: }
>   Y <<||>>
> }
>
> define x() {
>   $next = $name + 1
>   @@y { $next: }
>   X <<||>>
> }
>
> If I'm not completely off, this will create lots and lots of resources as A and B are evaluated alternately.

This might quite possibly destroy the universe if resource collection didn't ignore resources exported by the compiling host. Given that it does, though, you'd likely just get flapping and some very pissed coworkers.

> The last part might be a little bit off-topic, but I think it does pertain to the whole "all-nodes-are-part-of-the-system" thinking that is the motivation for Catalog storage/queries.

Yeah, that's a good point - one of the big goals here is to lose the 'nodes sit alone' perspective and really make them members of a larger whole.

--
Men never do evil so completely and cheerfully as when they do it from a
religious conviction. --Blaise Pascal

Luke Kanies

Sep 17, 2010, 5:13:41 PM
to puppe...@googlegroups.com
On Sep 17, 2010, at 6:08 AM, Brice Figureau wrote:

> On Wed, 2010-09-15 at 16:26 -0700, Luke Kanies wrote:
>> Hi all,
>>
>> I've just stuck my proposal for a Catalog Service (which we've been
>> bandying about internally for a while, and which I've been thinking
>> about even longer) on the wiki:
>>
>> http://projects.puppetlabs.com/projects/puppet/wiki/CatalogServiceArchitecture
>>
>> Comments appreciated, and ideas for how you would use it appreciated
>> even more.
>
> I need to read it again, but the envisioned change looks really
> interesting. I like this move :)
>
> There's something interesting about node dependencies. I understand how
> all the nodes catalogs form a graph, with one node catalog being a
> subgraph. However, is there a plan to extend the puppet/ruby DSL to
> express/enforce such dependencies?

Not yet, but I've been thinking about it. Really, how they're used is more of a problem than the expression, since they should generally be created automatically from resource collection. However, my 'external resource' prototype from April or so of this year is a decent example of how this might work, if you want to start taking advantage of them.

One of the things this will force us to begin to resolve is how these dependencies should change behaviour. Obviously you've got things like a required host's catalog being updated resulting in the requiring host's catalog getting updated, but should the requiring host wait on a service to come up when running the transaction? All unclear at this point.
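
As a hedged illustration of such an automatic dependency (the class names here are hypothetical), a collected exported resource ties one node's catalog to another node's compile:

class db::server {
  # export a host entry describing this database server
  @@host { "db-${fqdn}":
    ip  => $ipaddress,
    tag => 'appstack',
  }
}

class web::server {
  # collecting it makes this catalog depend on the db node having compiled
  Host <<| tag == 'appstack' |>>
}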

> I'm also wondering what would happen when serving a catalog of node A
> which requires some resources from node B which hasn't been yet
> compiled. Compiling node B might not be an option because we need those
> node facts (BTW, how do the facts fit in this system?).

Theoretically, this can't happen - the dependencies are all automatic, resulting from a host pulling another host's resources into its catalog, which means that you don't actually know about a dependency until both hosts have compiled their catalog. This could easily be seen as either a bug or a feature.

As to facts, this service definitely (and explicitly, I believe) requires the Inventory Service, which is basically just a service that stores and dishes facts, and from the agent's perspective everything's exactly the same - upload facts, download catalog.

I do expect this to change at some point, with the agent having a separate 'register' operation that sends facts, and then the existing operation that pulls and runs the catalog. Or really, breaking it into three operations:

puppet agent register
puppet agent retrieve/update/whatever
puppet agent run

> It would be great if your document could more clearly explain the
> various possible architectures (ie combined master/catalog service,
> multiple master to one catalog service, decentralized catalog services
> if possible...). Also it seems you imply it is needed to run both a
> master and a catalog service as separate processes (even on the same
> host), which I thought wasn't necessary (ie indirect directly to the
> terminus instead of :rest).

I'll make this more clear. You should definitely be able to run it all in one process.

> Regarding queries:
> I'm not sure I like the second proposed query system where you separate
> tags and parameters. I prefer the current system where parameters are in
> the "top namespace" like tags.

I agree, but I wanted to make it clear both were at least feasible. I'll more obviously express a preference in the document.

> I remember we also support ORs, which are not in your proposal. I
> personally don't use any ORs, and I can understand how they can make
> everything complex...

I don't believe we do in the Rails integration - in the build_active_record_query method, we just do ANDing. I thought we'd added it, but the last time I said that, I think you corrected me. :)

> Regarding back-ends:
> I have lots of doubts about the performance of the text-file solution,
> though.

I completely agree. Can you think of another solution that doesn't add dependencies?

> One of the nice things about using an RDBMS is that you can swap one
> for another quite independently. Once we choose a specific NoSQL or
> graph database, I'm afraid we'll have to live with it for a long time.
> I'm not talking about the implementation, but the actual use of the
> datastore, which might prove painful if for some reason we find it
> doesn't work as advertised and we need to switch to the new kid on
> the block.
>
> If possible, I think your document should make it clearer that
> back-ends will be plugins (i.e. abstracted), so that the user can
> choose between RDBMS and NoSQL (provided those plugins exist, of
> course).

Ok.

> And finally, you're talking about a migration tool. I don't really
> see how it will be possible to migrate from the current storeconfigs
> back-end to a newer system, since you told us that one of the issues
> with storeconfigs was that it lacked some necessary information. The
> only migration path I can see is: wait until all your nodes have
> asked for a catalog, then switch off storeconfigs :)

Yeah, probably true.

> About storing all the revisions of a given catalog: one of the issues
> is that the service needs an option to retrieve one specific version,
> or the list of all versions. This is easy to do with CouchDB because
> that's how it works, but other NoSQL solutions might not provide an
> easy way to do it. Of course, this is an interesting and powerful
> feature for auditing changes (especially if there is a corresponding
> audit tool).

I think this is going to be important for the future, but I didn't want to make it absolutely required for the first version. I agree that the API would need support for that, but it can't be that hard - get a specific version or date of a catalog for a specific host, right?
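
Something like these purely hypothetical routes (none of this exists yet) would probably cover it:

GET /catalog/<host>              # latest catalog
GET /catalog/<host>?version=<n>  # one specific stored version
GET /catalog/<host>/versions     # list all stored versions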

> Oh and I learnt a new word: paucity. Not sure I'll be able to use it in
> conversations, but thanks anyway :)

:)

--
I have never met a man so ignorant that I couldn't learn something
from him. --Galileo Galilei

David Schmitt

Sep 20, 2010, 10:15:57 AM
to puppe...@googlegroups.com
On 9/17/2010 11:03 PM, Luke Kanies wrote:
> On Sep 17, 2010, at 4:41 AM, David Schmitt wrote:
>
>> On 9/16/2010 1:26 AM, Luke Kanies wrote:
>>> Hi all,
>>>
>>> I've just stuck my proposal for a Catalog Service (which we've
>>> been bandying about internally for a while, and which I've been
>>> thinking about even longer) on the wiki:
>>>
>>> http://projects.puppetlabs.com/projects/puppet/wiki/CatalogServiceArchitecture
>>
>> Interesting read :-) Here are a few notes:
>
> I'll respond to the notes as necessary and update the document
> (probably on this next flight) as appropriate, but separately.
>
> I misread your notes at first - this first section is really about
> being very clear as to the steps necessary to create this, right?
> I.e., it's an explicit description of the work necessary to implement
> the document's goals?

Yes. The document itself was lacking a bit of structure in this respect
and I felt it needed a clear statement of the consequences.

>> * document needs a list of proposed functional changes; afaict:
>>   * insert a RESTful API between puppetmaster and Catalog storage,
>>     thereby exposing a proper interface
>>   * decouple compilation and catalog serving completely
>>     * btw, using futures, one could compile a "template" catalog
>>       and only insert the changing fact values quickly?
>
> This could be done, but I doubt that futures (which are a function in
> the parser) would be the mechanism. I certainly wouldn't want to tie
> this to futures, though.

An implementation detail. I was just brainstorming here.

See further below for my vision. Beware, though: I'm having delusions
of grandeur lately ;-)

>> node A { y { 1: } }
>> node B { x { 1: } }
>>
>> define y() {
>>   $next = $name + 1
>>   @@x { $next: }
>>   Y <<||>>
>> }
>>
>> define x() {
>>   $next = $name + 1
>>   @@y { $next: }
>>   X <<||>>
>> }
>>
>> If I'm not completely off, this will create lots and lots of
>> resources as A and B are evaluated alternately.
>
> This might quite possibly destroy the universe if resource collection
> didn't ignore resources exported by the compiling host. Given that
> it does, though, you'd likely just get flapping and some very pissed
> coworkers.

I don't think so:

@@file{"/tmp/foo": ensure=>present; }
File<<||>>

will create a "/tmp/foo" on the applying host. But then again, I don't
know the internals of the code...

>> The last part might be a little bit off-topic, but I think it does
>> pertain to the whole "all-nodes-are-part-of-the-system" thinking
>> that is the motivation for Catalog storage/queries.
>
>> Yeah, that's a good point - one of the big goals here is to lose the
>> 'nodes sit alone' perspective and really make them members of a
>> larger whole.

Mentally combining external node classification, fact storage and
offline-compile capability really made that idea click for me. It leads
to a mental model with a single step from definition to a Catalog for
the whole system, as opposed to a Catalog for a single node.

The last missing piece would be a puppetrun orchestrator that could
take this System-wide Catalog, toposort it and run it on the nodes as
necessary. Does anyone else see the connection to parallelizing and
grouping resource application in puppetd?

Luke Kanies

Sep 20, 2010, 11:11:25 AM
to puppe...@googlegroups.com
On Sep 20, 2010, at 7:15, David Schmitt <da...@dasz.at> wrote:

> Mentally combining external node classification, fact storage and offline-compile capability really made that idea click for me. It leads to a mental model with a single step from definition to a Catalog for the whole system, as opposed to a Catalog for a single node.
>
> The last missing piece would be a puppetrun orchestrator that could take this System-wide Catalog, toposort it and run it on the nodes as necessary. Does anyone else see the connection to parallelizing and grouping resource application in puppetd?

Well, I certainly do... That's kind of the main point. :)

--
Luke Kanies | +1-615-594-8199

Luke Kanies

Sep 21, 2010, 1:49:40 PM
to puppe...@googlegroups.com
On Sep 20, 2010, at 7:15 AM, David Schmitt wrote:

> On 9/17/2010 11:03 PM, Luke Kanies wrote:

> [...]


>>> node A { y { 1: } }
>>> node B { x { 1: } }
>>>
>>> define y() {
>>>   $next = $name + 1
>>>   @@x { $next: }
>>>   Y <<||>>
>>> }
>>>
>>> define x() {
>>>   $next = $name + 1
>>>   @@y { $next: }
>>>   X <<||>>
>>> }
>>>
>>> If I'm not completely off, this will create lots and lots of
>>> resources as A and B are evaluated alternately.
>>
>> This might quite possibly destroy the universe if resource collection
>> didn't ignore resources exported by the compiling host. Given that
>> it does, though, you'd likely just get flapping and some very pissed
>> coworkers.
>
> I don't think so:
>
> @@file { "/tmp/foo": ensure => present }
> File <<||>>
>
> will create a "/tmp/foo" on the applying host. But then again, I don't know the internals of the code...

That is true, but if you change the catalog to then have /tmp/bar created on the host, /tmp/foo will no longer be created.

That is, one compile for a host cannot affect the next compile (because a host doesn't pull its own resources from the db, only those of other hosts), thus your cascade of resources can't happen.

>>> The last part might be a little bit off-topic, but I think it does
>>> pertain to the whole "all-nodes-are-part-of-the-system" thinking
>>> that is the motivation for Catalog storage/queries.
>>
>> Yeah, that's a good point - one of the big goals here is to lose the
>> 'nodes sit alone' perspective and really make them members of a
>> larger whole.
>
> Mentally combining external node classification, fact storage and offline-compile capability really made that idea click for me. It leads to a mental model with a single step from definition to a Catalog for the whole system, as opposed to a Catalog for a single node.

I don't think you're quite at the single step phase -- you're probably going to want to incrementally compile the catalogs still -- but yeah, you're a lot closer. And you're certainly getting close to that point of being able to say you have one catalog which includes all other host catalogs, rather than a bunch of catalogs. :)

> The last missing piece would be a puppetrun orchestrator that could take this System-wide Catalog, toposort it and run it on the nodes as necessary. Does anyone else see the connection to parallelizing and grouping resource application in puppetd?

Yes, that is another (although far from the last, IMO) piece in the puzzle, although I'd actually split it into two - a compiler process (or pool of processes, most likely), and then a separate system that notifies individual hosts that they should pull a new catalog and tracks who's checked in and such.

But yeah, that's the general idea.

--
You can't have everything. Where would you put it?
-- Stephen Wright

Jeff McCune

Sep 21, 2010, 2:33:13 PM
to puppe...@googlegroups.com
On Wed, Sep 15, 2010 at 4:26 PM, Luke Kanies <lu...@puppetlabs.com> wrote:
[snip]

> Comments appreciated, and ideas for how you would use it appreciated even more.

After reading through it a couple of times, I'm not compelled to comment
on any specific aspects, so I think it looks good.

I'll keep an eye out for specific use cases and reply to the list for
comment as they arise.

--
Jeff McCune
http://www.puppetlabs.com/

Luke Kanies

Oct 1, 2010, 7:06:46 PM
to puppe...@googlegroups.com
On Sep 17, 2010, at 1:41 AM, David Schmitt wrote:

> On 9/16/2010 1:26 AM, Luke Kanies wrote:
>> Hi all,
>>
>> I've just stuck my proposal for a Catalog Service (which we've been
>> bandying about internally for a while, and which I've been thinking
>> about even longer) on the wiki:
>>
>> http://projects.puppetlabs.com/projects/puppet/wiki/CatalogServiceArchitecture
>
> Interesting read :-) Here are a few notes:
>
> * document needs a list of proposed functional changes; afaict:
>   * insert a RESTful API between puppetmaster and Catalog storage,
>     thereby exposing a proper interface
>   * decouple compilation and catalog serving completely
>     * btw, using futures, one could compile a "template" catalog
>       and only insert the changing fact values quickly?
>   * enrich the search API to cover all resources and complex queries
>   * implement additional backends
>     * simple, no external dependencies
>     * massively scalable, using some nosql solution

[...]

I've updated the document with these notes and Brice's. Would you prefer I post the whole doc here, or just rely on people checking the original out?

http://projects.puppetlabs.com/projects/puppet/wiki/CatalogServiceArchitecture

--
No matter how rich you become, how famous or powerful, when you die
the size of your funeral will still pretty much depend on the
weather. -- Michael Pritchard

David Schmitt

Oct 4, 2010, 6:37:43 AM
to puppe...@googlegroups.com
On 10/2/2010 1:06 AM, Luke Kanies wrote:
> On Sep 17, 2010, at 1:41 AM, David Schmitt wrote:
>
>> On 9/16/2010 1:26 AM, Luke Kanies wrote:

> I've updated the document with these notes and Brice's. Would you
> prefer I post the whole doc here, or just rely on people checking the
> original out?
>
> http://projects.puppetlabs.com/projects/puppet/wiki/CatalogServiceArchitecture

It's not like there were a ton of changes; mostly clarifications.

I'm wondering how this proposal will morph under your current bus/queue
focus.

Luke Kanies

Oct 5, 2010, 8:06:25 AM
to puppe...@googlegroups.com
On Oct 4, 2010, at 3:37, David Schmitt <da...@dasz.at> wrote:

> On 10/2/2010 1:06 AM, Luke Kanies wrote:
>> On Sep 17, 2010, at 1:41 AM, David Schmitt wrote:
>>
>>> On 9/16/2010 1:26 AM, Luke Kanies wrote:
>
>> I've updated the document with these notes and Brice's. Would you
>> prefer I post the whole doc here, or just rely on people checking the
>> original out?
>>
>> http://projects.puppetlabs.com/projects/puppet/wiki/CatalogServiceArchitecture
>
> It's not like there were a ton of changes; mostly clarifications.
>
> I'm wondering how this proposal will morph under your current bus/queue focus.

I'm wondering that, too. :)

At this point, it seems to be mostly a configuration problem - how you
specify who gets a copy of the catalog. The basics are the same, in
that you'll still have a db with a copy of all of the catalogs, but
most likely the bus will handle the duplication, rather than the
master.

--
Luke Kanies | +1-615-594-8199
