Announce: PuppetDB 1.1.0-rc4 now available


Moses Mendoza

Jan 8, 2013, 8:30:12 PM
to puppet...@googlegroups.com, puppe...@googlegroups.com
PuppetDB 1.1.0-rc4 is now available for download! This is a feature and bug-fix release candidate of PuppetDB.

Note: Release candidates 1-3 were never pushed to repositories due to packaging issues discovered prior to release. RC4 is the first packaged release candidate for PuppetDB 1.1.0.

# Downloads
Available in native package format in the pre-release repositories at:
http://yum.puppetlabs.com and http://apt.puppetlabs.com

For information on how to enable the Puppet Labs pre-release repos, see:

Puppet module:
http://forge.puppetlabs.com/puppetlabs/puppetdb

Source (same license as Puppet): http://github.com/puppetlabs/puppetdb/

Available for use with Puppet Enterprise 2.5.3 and later at
http://yum-enterprise.puppetlabs.com/ and http://apt-enterprise.puppetlabs.com/

# Documentation (including how to install): http://docs.puppetlabs.com/puppetdb

# Issues can be filed at:
http://projects.puppetlabs.com/projects/puppetdb/issues

# See our development board on Trello:
http://links.puppetlabs.com/puppetdb-trello

# Changelog

1.1.0-rc4
=========

Many thanks to the following people who contributed patches to this
release:

* Chris Price
* Deepak Giridharagopal
* Jeff Blaine
* Ken Barber
* Kushal Pisavadia
* Matthaus Litteken
* Michael Stahnke
* Moses Mendoza
* Nick Lewis
* Pierre-Yves Ritschard

Notable features:

* Enhanced query API

  A substantially improved version 2 of the HTTP query API has been added. This
  is located under the /v2 route. Detailed documentation on all the available
  routes and query language can be found in the API documentation, but here are
  a few of the noteworthy improvements:

  * Query based on regular expressions

    Regular expressions are now supported against most fields when querying
    against resources, facts, and nodes, using the ~ operator. This makes it
    easy to, for instance, find *all* IP addresses for a node, or apply a query
    to some set of nodes.

  * More node information

    Queries against the /v2/nodes endpoint now return objects, rather than
    simply a list of node names. These are effectively the same as what was
    previously returned by the /status endpoint, containing the node name, its
    deactivation time, as well as the timestamps of its latest catalog, facts,
    and report.

  * Full fact query

    The /v2/facts endpoint supports the same type of query language available
    when querying resources, where previously it could only be used to retrieve
    the set of facts for a given node. This makes it easy to find the value of
    some fact for all nodes, or to do more complex queries.

  * Subqueries

    Queries can now contain subqueries through the `select-resources` and
    `select-facts` operators. These operators perform queries equivalent to
    using the /v2/resources and /v2/facts routes, respectively. The information
    returned from them can then be correlated, to perform complex queries such
    as "fetch the IP address of all nodes with Class[apache]", or "fetch the
    operatingsystemrelease of all Debian nodes". These operators can also be
    nested and correlated on any field, to answer virtually any question in a
    single query.

  * Friendlier, RESTful query routes

    In addition to the standard query language, there are also now more
    friendly, "RESTful" query routes. For instance, /v2/nodes/foo.example.com
    will return information about the node foo.example.com. Similarly,
    /v2/facts/operatingsystem will return the operatingsystem of every node, or
    /v2/nodes/foo.example.com/operatingsystem can be used to just find the
    operatingsystem of foo.example.com.

    The same sort of routes are available for resources as well.
    /v2/resources/User will return every User resource, /v2/resources/User/joe
    will return every instance of the User[joe] resource, and
    /v2/nodes/foo.example.com/Package will return every Package resource on
    foo.example.com. These routes can also have a query parameter supplied, to
    further query against their results, as with the standard query API.
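As a sketch of what these v2 queries look like on the wire, the commands below are printed rather than executed (the host, node names, and query strings are illustrative placeholders, not copied from the official docs):

```shell
# Hypothetical PuppetDB instance; substitute your own host and port.
PDB="http://puppetdb.example.com:8080"

# Regex query with the ~ operator: nodes whose name starts with "web".
echo curl -G "$PDB/v2/nodes" --data-urlencode 'query=["~", "name", "^web"]'

# RESTful routes: one fact across all nodes, the same fact for a
# single node, and every User resource.
echo curl "$PDB/v2/facts/operatingsystem"
echo curl "$PDB/v2/nodes/foo.example.com/operatingsystem"
echo curl "$PDB/v2/resources/User"
```

Each printed command can be run directly against a live PuppetDB instance; see the API documentation for the full query language.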

* Improved catalog storage performance

   Some improvements have been made to the way catalog hashes are computed for
   deduplication, resulting in somewhat faster catalog storage, and a
   significant decrease in the amount of time taken to store the first catalog
   received after startup.

* Experimental report submission and storage

  The 'puppetdb' report processor is now available, which can be used
  (alongside any other reports) to submit reports to PuppetDB for storage. This
  feature is considered experimental, which means the query API may change
  significantly in the future. The ability to query reports is currently
  limited and experimental, meaning it is accessed via /experimental/reports
  rather than /v2/reports. Currently it is possible to get a list of reports
  for a node, and to retrieve the contents of a single report. More advanced
  querying (and integration with other query endpoints) will come in a future
  release.

  Unlike catalogs, reports are retained for a fixed time period (defaulting to
  7 days), rather than only the most recent report being stored. This means
  more data is available than just the latest, but also prevents the database
  from growing unbounded. See the documentation for information on how to
  configure the storage duration.
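A minimal sketch of enabling this might look as follows. The `puppetdb` report processor name comes from the announcement above; the `report-ttl` setting name and its placement under `[database]` are assumptions based on the documented retention default, so check the configuration docs before relying on them:

```ini
# puppet.conf on the master: add puppetdb alongside any existing
# report processors
[master]
    reports = store,puppetdb

# /etc/puppetdb/conf.d/config.ini: retain reports for 14 days
# instead of the default 7 (setting name assumed)
[database]
    report-ttl = 14d
```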

* Tweakable settings for database connection and ActiveMQ storage

  It is now possible to set the timeout for an idle database connection to be
  terminated, as well as the keep alive interval for the connection, through
  the `conn-max-age` and `conn-keep-alive` settings.

  The settings `store-usage` and `temp-usage` can be used to set the amount of
  disk space (in MB) for ActiveMQ to use for permanent and temporary message
  storage. The main use for these settings is to lower the usage from the
  default of 100GB and 50GB respectively, as ActiveMQ will issue a warning if
  that amount of space is not available.
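For illustration, these settings might be combined in config.ini like this (the section placements and units are assumptions: connection ages in minutes, ActiveMQ usage in MB):

```ini
# /etc/puppetdb/conf.d/config.ini -- illustrative values only
[database]
    # Terminate idle connections after 60 minutes, with
    # keep-alives every 45 minutes
    conn-max-age = 60
    conn-keep-alive = 45

[command-processing]
    # Cap ActiveMQ disk usage well below the 100 GB / 50 GB defaults
    store-usage = 10240
    temp-usage = 5120
```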

Behavior changes:
  * Messages received after a node is deactivated will be processed

    Previously, commands which were initially received before a node was
    deactivated, but not processed until after (for instance, because the first
    attempt to process the command failed, and the node was deactivated before
    the command was retried) were ignored and the node was left deactivated.
    For example, if a new catalog were submitted, but couldn't be processed
    because the database was temporarily down, and the node was deactivated
    before the catalog was retried, the catalog would be dropped. Now the
    catalog will be stored, though the node will stay deactivated. Commands
    *received* after a node is deactivated will continue to reactivate the node
    as before.

Moses Mendoza

Jan 10, 2013, 1:20:16 PM
to puppet...@googlegroups.com, pupp...@puppetlabs.com
On Thu, Jan 10, 2013 at 7:27 AM, Eric Kissinger
<eric.ki...@gmail.com> wrote:
>
>
> The puppetdb-1.1.0-0.1rc4.el6.noarch.rpm package seems to have an error when unpacking the puppetdb.jar.
>
> This is the output:
>
>
> Downloading Packages:
> puppetdb-1.1.0-0.1rc4.el6.noarch.rpm | 14 MB 00:00
> Running rpm_check_debug
> Running Transaction Test
> Transaction Test Succeeded
> Running Transaction
> Updating : puppetdb-1.1.0-0.1rc4.el6.noarch 1/2
> Error unpacking rpm package puppetdb-1.1.0-0.1rc4.el6.noarch
> warning: /etc/puppetdb/conf.d/config.ini created as /etc/puppetdb/conf.d/config.ini.rpmnew
> error: unpacking of archive failed on file /usr/share/puppetdb/puppetdb.jar;50eed3a6: cpio: read

Hi Eric,

I'm having a hard time replicating your issue. On my CentOS 6 VM, I
can do a clean install and also upgrade from puppetdb 1.0.5 without
issue. Also of note, your log shows a file size of 14 MB, but the
puppetdb package is more like 16 MB, which could indicate some
corruption somewhere. E.g., from my install output:

Downloading Packages:
Setting up and reading Presto delta metadata
Processing delta metadata
Package(s) data still to download: 16 M
puppetdb-1.1.0-0.1rc4.el6.noarch.rpm
| 16 MB 00:30
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : puppetdb-1.1.0-0.1rc4.el6.noarch
1/1
Certificate was added to keystore
Verifying : puppetdb-1.1.0-0.1rc4.el6.noarch
1/1

Installed:
puppetdb.noarch 0:1.1.0-0.1rc4.el6

Complete!

Perhaps something went awry while gathering your repo metadata? Maybe
try `yum clean metadata` and installing again. In case it's a
packaging issue, what version of puppetdb are you upgrading from, and
what OS are you installing onto?

Eric Kissinger

Jan 15, 2013, 9:08:21 AM
to puppet...@googlegroups.com, pupp...@puppetlabs.com


Must have been a problem with my download of rc4. rc5 worked without issue.

-Thanks

GRANIER Bernard (MORPHO)

Jan 15, 2013, 9:23:58 AM
to puppet...@googlegroups.com

Hi,

I am trying to use Maven to create a module packaged as a tar.gz file.

With the Geppetto plugin, the metadata.json file is managed automatically. How is that done?

I would like to generate metadata.json before the plugin-assembly execution.

Regards,

 

Bernard Granier

CE Plateforme Système

bernard...@morpho.com

01 58 11 32 51

 

