help with state that iterates over a pillar dictionary


schlag

Oct 17, 2013, 1:47:56 PM
to salt-...@googlegroups.com
Hi,

I was wondering if I could get some help with a state that uses a Jinja iteritems operation to iterate over a set of pillar dictionary values.


This is my pillar dictionary:  http://pastebin.com/gFSHiv2h   
--snip
ssh_configs:
  postgres:
    file:  /home/postgres/.ssh/config
    mode:  600
    uid:   9999
    gid:   9999
    content: |
      ###################### NOTE #########################
      # This file is managed by salt, do not edit locally #
      # All changes will be overwritten by salt!          #
      ###################### NOTE #########################
      StrictHostKeyChecking no
      UserKnownHostsFile /dev/null
--snip


and this is my state:
--snip
  1. {% if pillar['ssh_configs'] is defined %}
  2.   {% for user, userargs in pillar.get('ssh_configs').iteritems() %}
  3. {{ userargs['file'] }}:
  4.   file.managed:
  5.     - user:     {{ userargs['uid'] }}
  6.     - group:    {{ userargs['gid'] }}
  7.     - mode:     {{ userargs['mode'] }}
  8.     - makedirs: True
  9.     - contents: |
  10.         {{ userargs['content'] }}
  11.   {% endfor %}
  12. {% endif %}

--snip


I've verified that the minion gets the pillar dictionary:

--snip
salt-call pillar.get ssh_configs
postgres:
    ----------
    content:
        ###################### NOTE #########################
        # This file is managed by salt, do not edit locally #
        # All changes will be overwritten by salt!          #
        ###################### NOTE #########################
        StrictHostKeyChecking no
        UserKnownHostsFile /dev/null

    file:
        /home/postgres/.ssh/config
    gid:
        9999
    mode:
        600
    uid:
        9999
--snip

.. but when I try applying the state, I get an UndefinedError: 'str object' has no attribute 'iteritems' error.

--snip
salt-call -l debug state.sls environments.all.ssh test=True
[DEBUG   ] Reading configuration from /etc/salt/minion
[DEBUG   ] Configuration file path: /etc/salt/minion
[DEBUG   ] Reading configuration from /etc/salt/minion
[DEBUG   ] loading grain in ['/var/cache/salt/minion/extmods/grains', '/usr/lib/python2.7/dist-packages/salt/grains']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/grains, it is not a directory
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] loading module in ['/var/cache/salt/minion/extmods/modules', '/usr/lib/python2.7/dist-packages/salt/modules']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/modules, it is not a directory
[DEBUG   ] Loaded localemod as virtual locale
[DEBUG   ] Loaded groupadd as virtual group
[DEBUG   ] Loaded linux_sysctl as virtual sysctl
[DEBUG   ] Loaded sysmod as virtual sys
[DEBUG   ] Loaded parted as virtual partition
[DEBUG   ] Loaded apt as virtual pkg
[DEBUG   ] Loaded debian_service as virtual service
[DEBUG   ] Loaded useradd as virtual user
[DEBUG   ] Loaded dpkg as virtual lowpkg
[DEBUG   ] Loaded debconfmod as virtual debconf
[DEBUG   ] Loaded djangomod as virtual django
[DEBUG   ] Loaded cmdmod as virtual cmd
[DEBUG   ] Loaded linux_lvm as virtual lvm
[DEBUG   ] loading returner in ['/var/cache/salt/minion/extmods/returners', '/usr/lib/python2.7/dist-packages/salt/returners']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/returners, it is not a directory
[DEBUG   ] Loaded syslog_return as virtual syslog
[DEBUG   ] Loaded carbon_return as virtual carbon
[DEBUG   ] loading states in ['/var/cache/salt/minion/extmods/states', '/usr/lib/python2.7/dist-packages/salt/states']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/states, it is not a directory
[DEBUG   ] Loaded debconfmod as virtual debconf
[DEBUG   ] loading render in ['/var/cache/salt/minion/extmods/renderers', '/usr/lib/python2.7/dist-packages/salt/renderers']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/renderers, it is not a directory
[INFO    ] Executing command 'ps -efH' in directory '/root'
[DEBUG   ] output: UID        PID  PPID  C STIME TTY          TIME CMD
root         2     0  0 09:45 ?        00:00:00 [kthreadd]
root         3     2  0 09:45 ?        00:00:00   [ksoftirqd/0]
root         5     2  0 09:45 ?        00:00:00   [kworker/u:0]
root         6     2  0 09:45 ?        00:00:00   [migration/0]
root         7     2  0 09:45 ?        00:00:01   [watchdog/0]
root         8     2  0 09:45 ?        00:00:00   [cpuset]
root         9     2  0 09:45 ?        00:00:00   [khelper]
root        10     2  0 09:45 ?        00:00:00   [kdevtmpfs]
root        11     2  0 09:45 ?        00:00:00   [netns]
root        12     2  0 09:45 ?        00:00:00   [sync_supers]
root        13     2  0 09:45 ?        00:00:00   [bdi-default]
root        14     2  0 09:45 ?        00:00:00   [kintegrityd]
root        15     2  0 09:45 ?        00:00:00   [kblockd]
root        16     2  0 09:45 ?        00:00:00   [khungtaskd]
root        17     2  0 09:45 ?        00:00:00   [kswapd0]
root        18     2  0 09:45 ?        00:00:00   [ksmd]
root        19     2  0 09:45 ?        00:00:00   [khugepaged]
root        20     2  0 09:45 ?        00:00:00   [fsnotify_mark]
root        21     2  0 09:45 ?        00:00:00   [crypto]
root        24     2  0 09:45 ?        00:00:03   [kworker/0:1]
root        78     2  0 09:45 ?        00:00:00   [ata_sff]
root       103     2  0 09:45 ?        00:00:00   [mpt_poll_0]
root       113     2  0 09:45 ?        00:00:00   [mpt/0]
root       134     2  0 09:45 ?        00:00:00   [scsi_eh_0]
root       136     2  0 09:45 ?        00:00:00   [scsi_eh_1]
root       137     2  0 09:45 ?        00:00:00   [scsi_eh_2]
root       138     2  0 09:45 ?        00:00:00   [kworker/u:1]
root       175     2  0 09:45 ?        00:00:00   [kdmflush]
root       183     2  0 09:45 ?        00:00:00   [kdmflush]
root       198     2  0 09:45 ?        00:00:00   [jbd2/dm-0-8]
root       199     2  0 09:45 ?        00:00:00   [ext4-dio-unwrit]
root       459     2  0 09:45 ?        00:00:00   [kpsmoused]
root       508     2  0 09:45 ?        00:00:00   [ttm_swap]
root       647     2  0 09:45 ?        00:00:00   [kdmflush]
root       649     2  0 09:45 ?        00:00:00   [kdmflush]
root       651     2  0 09:45 ?        00:00:00   [kdmflush]
root       653     2  0 09:45 ?        00:00:00   [kdmflush]
root       655     2  0 09:45 ?        00:00:00   [kdmflush]
root      1298     2  0 09:45 ?        00:00:00   [jbd2/sda1-8]
root      1299     2  0 09:45 ?        00:00:00   [ext4-dio-unwrit]
root      1300     2  0 09:45 ?        00:00:00   [jbd2/dm-6-8]
root      1301     2  0 09:45 ?        00:00:00   [ext4-dio-unwrit]
root      1302     2  0 09:45 ?        00:00:00   [jbd2/dm-5-8]
root      1303     2  0 09:45 ?        00:00:00   [ext4-dio-unwrit]
root      1304     2  0 09:45 ?        00:00:00   [jbd2/dm-2-8]
root      1305     2  0 09:45 ?        00:00:00   [ext4-dio-unwrit]
root      1306     2  0 09:45 ?        00:00:00   [jbd2/dm-3-8]
root      1307     2  0 09:45 ?        00:00:00   [ext4-dio-unwrit]
root      1308     2  0 09:45 ?        00:00:00   [jbd2/dm-4-8]
root      1309     2  0 09:45 ?        00:00:00   [ext4-dio-unwrit]
root      1607     2  0 09:46 ?        00:00:00   [rpciod]
root      1609     2  0 09:46 ?        00:00:00   [nfsiod]
root      1786     2  0 09:46 ?        00:00:00   [lockd]
root      1948     2  0 09:46 ?        00:00:00   [flush-254:2]
root      1950     2  0 09:46 ?        00:00:00   [flush-254:4]
root     22637     2  0 12:48 ?        00:00:00   [xfs_mru_cache]
root     22638     2  0 12:48 ?        00:00:00   [xfslogd]
root     22639     2  0 12:48 ?        00:00:00   [xfsdatad]
root     22640     2  0 12:48 ?        00:00:00   [xfsconvertd]
root     22643     2  0 12:48 ?        00:00:00   [jfsIO]
root     22644     2  0 12:48 ?        00:00:00   [jfsCommit]
root     22645     2  0 12:48 ?        00:00:00   [jfsSync]
root     23281     2  0 12:48 ?        00:00:00   [kworker/0:0]
root         1     0  0 09:45 ?        00:00:03 init [2]
root       332     1  0 09:45 ?        00:00:00   udevd --daemon
root     22619   332  0 12:48 ?        00:00:00     udevd --daemon
root      1571     1  0 09:46 ?        00:00:00   /sbin/rpcbind -w
statd     1602     1  0 09:46 ?        00:00:00   /sbin/rpc.statd
root      1616     1  0 09:46 ?        00:00:00   /usr/sbin/rpc.idmapd
root      2030     1  0 09:46 ?        00:00:00   /opt/pbis/sbin/lwsmd --start-as-daemon
root      2038  2030  0 09:46 ?        00:00:00     lw-container lwreg
root      2056  2030  0 09:46 ?        00:00:00     lw-container eventlog
root      2067  2030  0 09:46 ?        00:00:00     lw-container netlogon
root      2077  2030  0 09:46 ?        00:00:00     lw-container lwio
root      2090  2030  0 09:46 ?        00:00:11     lw-container lsass
root      2107  2030  0 09:46 ?        00:00:00     lw-container reapsysl
newrelic  2046     1  0 09:46 ?        00:00:00   /usr/sbin/nrsysmond -c /etc/newrelic/nrsysmond.cfg -p /var/run/nrsysmond.pid
newrelic  2047  2046  0 09:46 ?        00:00:06     /usr/sbin/nrsysmond -c /etc/newrelic/nrsysmond.cfg -p /var/run/nrsysmond.pid
root      2201     1  0 09:46 ?        00:00:00   /usr/sbin/acpid
daemon    2226     1  0 09:46 ?        00:00:00   /usr/sbin/atd
root      2268     1  0 09:46 ?        00:00:00   /usr/sbin/cron
ntp       2297     1  0 09:46 ?        00:00:00   /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 105:109
root      2410     1  0 09:46 ?        00:00:00   /usr/lib/postfix/master
postfix   2432  2410  0 09:46 ?        00:00:00     qmgr -l -t fifo -u
postfix  28006  2410  0 13:06 ?        00:00:00     pickup -l -t fifo -u -c
root      2471     1  0 09:46 ?        00:00:00   /usr/sbin/sshd
root      3728  2471  0 11:05 ?        00:00:00     sshd: user1 [priv]
user1    3740  3728  0 11:05 ?        00:00:00       sshd: user1@pts/0
user1    3741  3740  0 11:05 pts/0    00:00:03         -bash
root      3829  3741  0 11:06 pts/0    00:00:00           sudo su
root      3830  3829  0 11:06 pts/0    00:00:00             su
root      3831  3830  0 11:06 pts/0    00:00:00               bash
root     28352  3831 92 13:37 pts/0    00:00:07                 /usr/bin/python /usr/bin/salt-call -l debug state.sls environments.all.ssh test=True
root     28378 28352  0 13:37 pts/0    00:00:00                   ps -efH
root      2498     1  0 09:46 ?        00:00:01   /usr/bin/python /usr/bin/supervisord
root      2741     1  0 09:46 ?        00:00:06   /usr/sbin/vmtoolsd
root      2764     1  0 09:46 tty1     00:00:00   /sbin/getty 38400 tty1
root      2765     1  0 09:46 tty2     00:00:00   /sbin/getty 38400 tty2
root      2766     1  0 09:46 tty3     00:00:00   /sbin/getty 38400 tty3
root      2767     1  0 09:46 tty4     00:00:00   /sbin/getty 38400 tty4
root      2768     1  0 09:46 tty5     00:00:00   /sbin/getty 38400 tty5
root      2769     1  0 09:46 tty6     00:00:00   /sbin/getty 38400 tty6
postgres  6689     1  0 12:15 ?        00:00:00   /usr/lib/postgresql/9.2/bin/postgres -D /var/lib/postgresql/9.2/main -c config_file=/etc/postgresql/9.2/main/postgresql.conf
postgres  6691  6689  0 12:15 ?        00:00:00     postgres: checkpointer process
postgres  6692  6689  0 12:15 ?        00:00:00     postgres: writer process
postgres  6693  6689  0 12:15 ?        00:00:00     postgres: wal writer process
postgres  6694  6689  0 12:15 ?        00:00:00     postgres: autovacuum launcher process
postgres  6695  6689  0 12:15 ?        00:00:00     postgres: stats collector process
postgres 10938     1  0 12:16 ?        00:00:00   /usr/sbin/pgpool -n
postgres 11065 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11066 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11067 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11068 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11069 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11070 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11071 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11072 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11073 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11074 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11075 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11076 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11077 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11078 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11079 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11080 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11081 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11082 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11083 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11084 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11085 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11086 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11087 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11088 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11089 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11090 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11091 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11092 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11093 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11094 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11095 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11096 10938  0 12:17 ?        00:00:00     pgpool: wait for connection request
postgres 11097 10938  0 12:17 ?        00:00:00     pgpool: PCP: wait for connection request
postgres 11098 10938  0 12:17 ?        00:00:00     pgpool: worker process
postgres 10939     1  0 12:16 ?        00:00:00   logger -t pgpool -p local0.info
root     15770     1  0 12:40 ?        00:00:00   /usr/sbin/rsyslogd -c5
root     23169     1  0 12:48 ?        00:00:01   /usr/bin/python /usr/bin/salt-minion -d
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] Reading configuration from /etc/salt/minion
[DEBUG   ] loading grain in ['/var/cache/salt/minion/extmods/grains', '/usr/lib/python2.7/dist-packages/salt/grains']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/grains, it is not a directory
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[INFO    ] Loading fresh modules for state activity
[DEBUG   ] loading module in ['/var/cache/salt/minion/extmods/modules', '/usr/lib/python2.7/dist-packages/salt/modules']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/modules, it is not a directory
[DEBUG   ] Loaded localemod as virtual locale
[DEBUG   ] Loaded groupadd as virtual group
[DEBUG   ] Loaded linux_sysctl as virtual sysctl
[DEBUG   ] Loaded sysmod as virtual sys
[DEBUG   ] Loaded parted as virtual partition
[DEBUG   ] Loaded apt as virtual pkg
[DEBUG   ] Loaded debian_service as virtual service
[DEBUG   ] Loaded useradd as virtual user
[DEBUG   ] Loaded dpkg as virtual lowpkg
[DEBUG   ] Loaded debconfmod as virtual debconf
[DEBUG   ] Loaded djangomod as virtual django
[DEBUG   ] Loaded cmdmod as virtual cmd
[DEBUG   ] Loaded linux_lvm as virtual lvm
[DEBUG   ] loading states in ['/var/cache/salt/minion/extmods/states', '/usr/lib/python2.7/dist-packages/salt/states']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/states, it is not a directory
[DEBUG   ] Loaded debconfmod as virtual debconf
[DEBUG   ] loading render in ['/var/cache/salt/minion/extmods/renderers', '/usr/lib/python2.7/dist-packages/salt/renderers']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/renderers, it is not a directory
[DEBUG   ] loading module in ['/var/cache/salt/minion/extmods/modules', '/usr/lib/python2.7/dist-packages/salt/modules']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/modules, it is not a directory
[DEBUG   ] Loaded localemod as virtual locale
[DEBUG   ] Loaded groupadd as virtual group
[DEBUG   ] Loaded linux_sysctl as virtual sysctl
[DEBUG   ] Loaded sysmod as virtual sys
[DEBUG   ] Loaded parted as virtual partition
[DEBUG   ] Loaded apt as virtual pkg
[DEBUG   ] Loaded debian_service as virtual service
[DEBUG   ] Loaded useradd as virtual user
[DEBUG   ] Loaded dpkg as virtual lowpkg
[DEBUG   ] Loaded debconfmod as virtual debconf
[DEBUG   ] Loaded djangomod as virtual django
[DEBUG   ] Loaded cmdmod as virtual cmd
[DEBUG   ] Loaded linux_lvm as virtual lvm
[INFO    ] Fetching file 'salt://environments/all/ssh.sls'
[INFO    ] Fetching file 'salt://environments/all/ssh/init.sls'
[DEBUG   ] Jinja search path: '['/var/cache/salt/minion/files/base']'
[ERROR   ] Rendering SLS environments.all.ssh failed, render error: Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/salt/utils/templates.py", line 63, in render_tmpl
    output = render_str(tmplstr, context, tmplpath)
  File "/usr/lib/python2.7/dist-packages/salt/utils/templates.py", line 116, in render_jinja_tmpl
    output = jinja_env.from_string(tmplstr).render(**context)
  File "/usr/lib/python2.7/dist-packages/jinja2/environment.py", line 894, in render
    return self.environment.handle_exception(exc_info, True)
  File "<template>", line 22, in top-level template code
UndefinedError: 'str object' has no attribute 'iteritems'
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1904, in render_state
    rendered_sls=mods
  File "/usr/lib/python2.7/dist-packages/salt/template.py", line 68, in compile_template
    ret = render(input_data, env, sls, **render_kwargs)
  File "/usr/lib/python2.7/dist-packages/salt/renderers/jinja.py", line 41, in render
    tmp_data.get('data', 'Unknown render error in jinja renderer')
SaltRenderError: Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/salt/utils/templates.py", line 63, in render_tmpl
    output = render_str(tmplstr, context, tmplpath)
  File "/usr/lib/python2.7/dist-packages/salt/utils/templates.py", line 116, in render_jinja_tmpl
    output = jinja_env.from_string(tmplstr).render(**context)
  File "/usr/lib/python2.7/dist-packages/jinja2/environment.py", line 894, in render
    return self.environment.handle_exception(exc_info, True)
  File "<template>", line 22, in top-level template code
UndefinedError: 'str object' has no attribute 'iteritems'

[DEBUG   ] loading output in ['/var/cache/salt/minion/extmods/output', '/usr/lib/python2.7/dist-packages/salt/output']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/output, it is not a directory
[DEBUG   ] Loaded no_out as virtual quiet
[DEBUG   ] Loaded json_out as virtual json
[DEBUG   ] Loaded yaml_out as virtual yaml
[DEBUG   ] Loaded pprint_out as virtual pprint
local:
    Data failed to compile:
----------
    Rendering SLS environments.all.ssh failed, render error: Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/salt/utils/templates.py", line 63, in render_tmpl
    output = render_str(tmplstr, context, tmplpath)
  File "/usr/lib/python2.7/dist-packages/salt/utils/templates.py", line 116, in render_jinja_tmpl
    output = jinja_env.from_string(tmplstr).render(**context)
  File "/usr/lib/python2.7/dist-packages/jinja2/environment.py", line 894, in render
    return self.environment.handle_exception(exc_info, True)
  File "<template>", line 22, in top-level template code
UndefinedError: 'str object' has no attribute 'iteritems'

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1904, in render_state
    rendered_sls=mods
  File "/usr/lib/python2.7/dist-packages/salt/template.py", line 68, in compile_template
    ret = render(input_data, env, sls, **render_kwargs)
  File "/usr/lib/python2.7/dist-packages/salt/renderers/jinja.py", line 41, in render
    tmp_data.get('data', 'Unknown render error in jinja renderer')
SaltRenderError: Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/salt/utils/templates.py", line 63, in render_tmpl
    output = render_str(tmplstr, context, tmplpath)
  File "/usr/lib/python2.7/dist-packages/salt/utils/templates.py", line 116, in render_jinja_tmpl
    output = jinja_env.from_string(tmplstr).render(**context)
  File "/usr/lib/python2.7/dist-packages/jinja2/environment.py", line 894, in render
    return self.environment.handle_exception(exc_info, True)
  File "<template>", line 22, in top-level template code
UndefinedError: 'str object' has no attribute 'iteritems'
--snip
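For what it's worth, the failing lookup can be reproduced outside Salt with the jinja2 library alone. This is a minimal sketch, not the actual state: the pillar keys are copied from the state above, and note that under Python 3 dictionaries no longer have iteritems(), so items() is used for the working case:

```python
import jinja2

env = jinja2.Environment()

# The same lookup the state performs. If the value behind
# pillar['ssh_configs'] is a string rather than a dict, the attribute
# lookup yields an Undefined object and calling it raises UndefinedError,
# with the exact message seen in the traceback above.
bad = env.from_string(
    "{% for user, userargs in pillar['ssh_configs'].iteritems() %}"
    "{{ userargs['file'] }}{% endfor %}"
)
try:
    bad.render(pillar={"ssh_configs": "oops, a string"})
    err = ""
except jinja2.exceptions.UndefinedError as exc:
    err = str(exc)
print(err)  # ... has no attribute 'iteritems'

# With a real dict (and items(), which works on both Python 2 and 3)
# the same loop renders fine.
good = env.from_string(
    "{% for user, userargs in pillar['ssh_configs'].items() %}"
    "{{ user }}:{{ userargs['file'] }}{% endfor %}"
)
out = good.render(
    pillar={"ssh_configs": {"postgres": {"file": "/home/postgres/.ssh/config"}}}
)
print(out)
```

So the error message by itself is strong evidence that whatever ended up behind that pillar key at render time was a string, not the expected dictionary.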


Master:
salt --versions-report
           Salt: 0.16.4
         Python: 2.6.6 (r266:84292, Dec 26 2010, 22:31:48)
         Jinja2: 2.5.5
       M2Crypto: 0.20.1
 msgpack-python: 0.1.10
   msgpack-pure: Not Installed
       pycrypto: 2.1.0
         PyYAML: 3.09
          PyZMQ: 13.1.0
            ZMQ: 3.2.3


Minion:
salt-call --versions-report
           Salt: 0.16.4
         Python: 2.7.3 (default, Jan  2 2013, 13:56:14)
         Jinja2: 2.6
       M2Crypto: 0.21.1
 msgpack-python: 0.1.10
   msgpack-pure: Not Installed
       pycrypto: 2.6
         PyYAML: 3.10
          PyZMQ: 13.1.0
            ZMQ: 3.2.3


Does anyone have any ideas?  I use this methodology quite frequently, but I am unable to figure out what I'm doing wrong.  Thanks in advance!

Mrten

Oct 17, 2013, 2:00:18 PM
to salt-...@googlegroups.com
On 17/10/2013 19:47, schlag wrote:
> Hi,
>
> I was wondering if i could get some help with a state that uses a jinja
> iteritems operation for iterating through a set of pillar dictionary
> values.
>
>
> This is my pillar dictionary: http://pastebin.com/gFSHiv2h

> Does anyone have any ideas? I use this methodology quite frequently,
> but I am unable to figure out what I'm doing wrong. Thanks in advance!

Try

pillar['ssh_configs'].iteritems() instead of the get() construct?


Also, I regularly paste my pillars and states into the online YAML
parser to see if they are what I intend them to be.

http://yaml-online-parser.appspot.com/
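The same check can also be done locally with PyYAML, if the online parser isn't handy. A sketch (assuming PyYAML is installed), using the pillar from the first post:

```python
import pprint

import yaml

# An inline copy of the pillar snippet; in practice you would read
# your actual pillar .sls file instead.
pillar_text = """\
ssh_configs:
  postgres:
    file:  /home/postgres/.ssh/config
    mode:  600
    content: |
      StrictHostKeyChecking no
      UserKnownHostsFile /dev/null
"""

# safe_load parses the document much as Salt's YAML renderer will,
# so the printed structure shows what the state's Jinja loop receives.
data = yaml.safe_load(pillar_text)
pprint.pprint(data)
```

If the printed value under ssh_configs is a string rather than a nested dict, that is the iteritems() failure in miniature.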

M.

Ethan Erchinger

Oct 17, 2013, 2:30:20 PM
to salt-...@googlegroups.com
The error seems to reference line #22, while your iteration code is on line 2; is that just a copy/paste issue? It seems like either the pillar data isn't being seen in the state, or the state being executed isn't the one you think it is.

Your access pattern of using pillar.get()...iteritems() should work just fine, but this error is exactly what you would see if the value of ssh_configs were just a string.

What returns when running this:
salt-call pillar.get ssh_configs:postgres

Maybe even try putting this in a test state:
# {{ pillar['ssh_configs'] }}

Run in debug mode and see how the state renders.
Ethan

schlag

Oct 17, 2013, 3:32:13 PM
to salt-...@googlegroups.com
Thanks for your suggestions. I had some other stuff in the pillar file that was commented out with {# and #}, but that didn't actually seem to be doing the job, so I removed it entirely. I tested the pillar content using the YAML parser as Mrten suggested (it looked good) and stopped using pillar.get, but I was still getting the same errors. I also found a "contents_pillar" option for file.managed, available as of 0.17, which seems to deal with multiline pillar values, and upgraded both my master and minion to try it. I get a different error now, but the problem still seems to be an issue with the multiline pillar value (content).

slightly re-engineered state:

--snip
{% if pillar['ssh_configs'] is defined %}
  {% for file, fileargs in pillar['ssh_configs'].iteritems() %}
{{ file }}:
  file.managed:
    - template: jinja
    - user:     {{ fileargs['user'] }}
    - group:    {{ fileargs['group'] }}
    - mode:     {{ fileargs['mode'] }}
    - makedirs: True
    - contents_pillar:
        {{ fileargs['content'] }}
  {% endfor %}
{% endif %}
--snip


(NOTE: I've tried this with contents_pillar: | as well, to no avail for multiline content.)
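One reading of the 0.17 docs is that contents_pillar takes the name of a pillar key rather than the content itself, with colons delimiting nested keys. Under that reading, a sketch of the loop (assuming the pillar layout shown below, where each top-level key is the file path and the text lives under its "content" key) would look like this:

```yaml
{% if pillar['ssh_configs'] is defined %}
  {% for file, fileargs in pillar['ssh_configs'].iteritems() %}
{{ file }}:
  file.managed:
    - user:     {{ fileargs['user'] }}
    - group:    {{ fileargs['group'] }}
    - mode:     {{ fileargs['mode'] }}
    - makedirs: True
    # contents_pillar wants a pillar key path, not the text itself;
    # nested keys are colon-delimited
    - contents_pillar: ssh_configs:{{ file }}:content
  {% endfor %}
{% endif %}
```

That keeps the multiline value out of the rendered YAML entirely, which sidesteps the indentation problem visible in the rendered state below.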


pillar:

--snip
ssh_configs:
  /home/postgres/.ssh/config:
    mode:  600
    user:  postgres
    group: postgres
    require:
      user:  postgres
    content: |
      ###################### NOTE #########################
      # This file is managed by salt, do not edit locally #
      # All changes will be overwritten by salt!          #
      ###################### NOTE #########################
      StrictHostKeyChecking no
      UserKnownHostsFile /dev/null
--snip


The state gets rendered like this:

--snip
/home/postgres/.ssh/config:
  file.managed:
    - template: jinja
    - user:     postgres
    - group:    postgres
    - mode:     600
    - makedirs: True
    - contents_pillar:
        ###################### NOTE #########################
# This file is managed by salt, do not edit locally #
# All changes will be overwritten by salt!          #
###################### NOTE #########################
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
--snip

Which was resulting in this:

--snip
[ERROR   ] An un-handled exception was caught by salt's global exception handler:
TypeError: list indices must be integers, not str
Traceback (most recent call last):
  File "/usr/bin/salt-call", line 11, in <module>
    salt_call()
  File "/usr/lib/python2.7/dist-packages/salt/scripts.py", line 77, in salt_call
    client.run()
  File "/usr/lib/python2.7/dist-packages/salt/cli/__init__.py", line 303, in run
    caller.run()
  File "/usr/lib/python2.7/dist-packages/salt/cli/caller.py", line 141, in run
    self.opts)
  File "/usr/lib/python2.7/dist-packages/salt/output/__init__.py", line 30, in display_output
    display_data = get_printout(out, opts)(data).rstrip()
  File "/usr/lib/python2.7/dist-packages/salt/output/highstate.py", line 55, in output
    data[host] = _strip_clean(data[host])
  File "/usr/lib/python2.7/dist-packages/salt/output/highstate.py", line 213, in _strip_clean
    if returns[tag]['result'] and not returns[tag]['changes']:
TypeError: list indices must be integers, not str
Traceback (most recent call last):
  File "/usr/bin/salt-call", line 11, in <module>
    salt_call()
  File "/usr/lib/python2.7/dist-packages/salt/scripts.py", line 77, in salt_call
    client.run()
  File "/usr/lib/python2.7/dist-packages/salt/cli/__init__.py", line 303, in run
    caller.run()
  File "/usr/lib/python2.7/dist-packages/salt/cli/caller.py", line 141, in run
    self.opts)
  File "/usr/lib/python2.7/dist-packages/salt/output/__init__.py", line 30, in display_output
    display_data = get_printout(out, opts)(data).rstrip()
  File "/usr/lib/python2.7/dist-packages/salt/output/highstate.py", line 55, in output
    data[host] = _strip_clean(data[host])
  File "/usr/lib/python2.7/dist-packages/salt/output/highstate.py", line 213, in _strip_clean
    if returns[tag]['result'] and not returns[tag]['changes']:
TypeError: list indices must be integers, not str
--snip


At this point, I tried doing this in the pillar instead:

--snip
ssh_configs:
  /home/postgres/.ssh/config:
    mode:  600
    user:  postgres
    group: postgres
    require:
      user:  postgres
    content:  '###################### NOTE #########################\n# This file is managed by salt, do not edit locally #\n# All changes will be overwritten by salt!          #\n###################### NOTE #########################\nStrictHostKeyChecking no\nUserKnownHostsFile /dev/null'
--snip


I quoted 'content' because without the quotes, its value was being pulled from pillar as None.

eg:

--snip
salt-call pillar.get ssh_configs
local:
    ----------
    /home/postgres/.ssh/config:
        ----------
        content:
            None
        group:
            postgres
        mode:
            600
        require:
            ----------
            user:
                postgres
        user:
            postgres
--snip

With the newlines embedded and the value quoted, the minion sees this:

--snip
salt-call pillar.get ssh_configs
local:
    ----------
    /home/postgres/.ssh/config:
        ----------
        content:
            ###################### NOTE #########################\n# This file is managed by salt, do not edit locally #\n# All changes will be overwritten by salt!          #\n###################### NOTE #########################\nStrictHostKeyChecking no\nUserKnownHostsFile /dev/null
        group:
            postgres
        mode:
            600
        require:
            ----------
            user:
                postgres
        user:
            postgres
--snip

The YAML output looks good, I think:

--snip
{
  "ssh_configs": {
    "/home/postgres/.ssh/config": {
      "content": "###################### NOTE #########################\\n# This file is managed by salt, do not edit locally #\\n# All changes will be overwritten by salt!          #\\n###################### NOTE #########################\\nStrictHostKeyChecking no\\nUserKnownHostsFile /dev/null", 
      "require": {
        "user": "postgres"
      }, 
      "group": "postgres", 
      "mode": 600, 
      "user": "postgres"
    }
  }
}
--snip


Now applying the state suggests that it works and the file is created, but it is completely empty.

--snip
salt-call state.sls environments.all.ssh -l debug
[DEBUG   ] Reading configuration from /etc/salt/minion
[DEBUG   ] loading log_handlers in ['/var/cache/salt/minion/extmods/log_handlers', '/usr/lib/python2.7/dist-packages/salt/log/handlers']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/log_handlers, it is not a directory
[DEBUG   ] None of the required configuration sections, 'logstash_udp_handler' and 'logstash_zmq_handler', were found the in the configuration. Not loading the Logstash logging handlers module.
[DEBUG   ] Configuration file path: /etc/salt/minion
[DEBUG   ] Reading configuration from /etc/salt/minion
[DEBUG   ] loading grain in ['/var/cache/salt/minion/extmods/grains', '/usr/lib/python2.7/dist-packages/salt/grains']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/grains, it is not a directory
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] loading module in ['/var/cache/salt/minion/extmods/modules', '/usr/lib/python2.7/dist-packages/salt/modules']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/modules, it is not a directory
[DEBUG   ] Loaded localemod as virtual locale
[DEBUG   ] Loaded groupadd as virtual group
[DEBUG   ] Loaded linux_sysctl as virtual sysctl
[DEBUG   ] Loaded sysmod as virtual sys
[DEBUG   ] Loaded parted as virtual partition
[DEBUG   ] Loaded apt as virtual pkg
[DEBUG   ] Loaded debian_service as virtual service
[DEBUG   ] Loaded useradd as virtual user
[DEBUG   ] Loaded dpkg as virtual lowpkg
[DEBUG   ] Loaded debconfmod as virtual debconf
[DEBUG   ] Loaded virtualenv_mod as virtual virtualenv
[DEBUG   ] Loaded djangomod as virtual django
[DEBUG   ] Loaded cmdmod as virtual cmd
[DEBUG   ] Loaded linux_lvm as virtual lvm
[DEBUG   ] loading returner in ['/var/cache/salt/minion/extmods/returners', '/usr/lib/python2.7/dist-packages/salt/returners']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/returners, it is not a directory
[DEBUG   ] Loaded couchdb_return as virtual couchdb
[DEBUG   ] Loaded syslog_return as virtual syslog
[DEBUG   ] Loaded carbon_return as virtual carbon
[DEBUG   ] Loaded sqlite3_return as virtual sqlite3
[DEBUG   ] loading states in ['/var/cache/salt/minion/extmods/states', '/usr/lib/python2.7/dist-packages/salt/states']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/states, it is not a directory
[DEBUG   ] Loaded saltmod as virtual salt
[DEBUG   ] Loaded pip_state as virtual pip
[DEBUG   ] Loaded virtualenv_mod as virtual virtualenv
[DEBUG   ] Loaded debconfmod as virtual debconf
[DEBUG   ] loading render in ['/var/cache/salt/minion/extmods/renderers', '/usr/lib/python2.7/dist-packages/salt/renderers']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/renderers, it is not a directory
[INFO    ] Executing command 'ps -efH' in directory '/root'
[DEBUG   ] output: UID        PID  PPID  C STIME TTY          TIME CMD
(... full ps -efH process listing trimmed ...)
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] Reading configuration from /etc/salt/minion
[DEBUG   ] loading grain in ['/var/cache/salt/minion/extmods/grains', '/usr/lib/python2.7/dist-packages/salt/grains']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/grains, it is not a directory
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] Decrypting the current master AES key
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[DEBUG   ] Loaded minion key: /etc/salt/pki/minion/minion.pem
[INFO    ] Loading fresh modules for state activity
[DEBUG   ] loading module in ['/var/cache/salt/minion/extmods/modules', '/usr/lib/python2.7/dist-packages/salt/modules']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/modules, it is not a directory
[DEBUG   ] Loaded localemod as virtual locale
[DEBUG   ] Loaded groupadd as virtual group
[DEBUG   ] Loaded linux_sysctl as virtual sysctl
[DEBUG   ] Loaded sysmod as virtual sys
[DEBUG   ] Loaded parted as virtual partition
[DEBUG   ] Loaded apt as virtual pkg
[DEBUG   ] Loaded debian_service as virtual service
[DEBUG   ] Loaded useradd as virtual user
[DEBUG   ] Loaded dpkg as virtual lowpkg
[DEBUG   ] Loaded debconfmod as virtual debconf
[DEBUG   ] Loaded virtualenv_mod as virtual virtualenv
[DEBUG   ] Loaded djangomod as virtual django
[DEBUG   ] Loaded cmdmod as virtual cmd
[DEBUG   ] Loaded linux_lvm as virtual lvm
[DEBUG   ] loading states in ['/var/cache/salt/minion/extmods/states', '/usr/lib/python2.7/dist-packages/salt/states']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/states, it is not a directory
[DEBUG   ] Loaded saltmod as virtual salt
[DEBUG   ] Loaded pip_state as virtual pip
[DEBUG   ] Loaded virtualenv_mod as virtual virtualenv
[DEBUG   ] Loaded debconfmod as virtual debconf
[DEBUG   ] loading render in ['/var/cache/salt/minion/extmods/renderers', '/usr/lib/python2.7/dist-packages/salt/renderers']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/renderers, it is not a directory
[DEBUG   ] loading module in ['/var/cache/salt/minion/extmods/modules', '/usr/lib/python2.7/dist-packages/salt/modules']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/modules, it is not a directory
[DEBUG   ] Loaded localemod as virtual locale
[DEBUG   ] Loaded groupadd as virtual group
[DEBUG   ] Loaded linux_sysctl as virtual sysctl
[DEBUG   ] Loaded sysmod as virtual sys
[DEBUG   ] Loaded parted as virtual partition
[DEBUG   ] Loaded apt as virtual pkg
[DEBUG   ] Loaded debian_service as virtual service
[DEBUG   ] Loaded useradd as virtual user
[DEBUG   ] Loaded dpkg as virtual lowpkg
[DEBUG   ] Loaded debconfmod as virtual debconf
[DEBUG   ] Loaded virtualenv_mod as virtual virtualenv
[DEBUG   ] Loaded djangomod as virtual django
[DEBUG   ] Loaded cmdmod as virtual cmd
[DEBUG   ] Loaded linux_lvm as virtual lvm
[DEBUG   ] Fetching file ** attempting ** 'salt://environments/all/ssh.sls'
[INFO    ] Fetching file ** skipped **, latest already in cache 'salt://environments/all/ssh/init.sls'
[DEBUG   ] Jinja search path: '['/var/cache/salt/minion/files/base']'
[DEBUG   ] Rendered data from file: /var/cache/salt/minion/files/base/environments/all/ssh/init.sls:




  
/home/postgres/.ssh/config:
  file.managed:
    - template: jinja
    - user:     postgres
    - group:    postgres
    - mode:     600
    - makedirs: True
    - contents_pillar: ###################### NOTE #########################\n# This file is managed by salt, do not edit locally #\n# All changes will be overwritten by salt!          #\n###################### NOTE #########################\nStrictHostKeyChecking no\nUserKnownHostsFile /dev/null
  


[DEBUG   ] Results of YAML rendering: 
OrderedDict([('/home/postgres/.ssh/config', OrderedDict([('file.managed', [OrderedDict([('template', 'jinja')]), OrderedDict([('user', 'postgres')]), OrderedDict([('group', 'postgres')]), OrderedDict([('mode', 600)]), OrderedDict([('makedirs', True)]), OrderedDict([('contents_pillar', None)])])]))])
[INFO    ] Executing state file.managed for /home/postgres/.ssh/config
[INFO    ] File /home/postgres/.ssh/config is in the correct state
[DEBUG   ] loading output in ['/var/cache/salt/minion/extmods/output', '/usr/lib/python2.7/dist-packages/salt/output']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/output, it is not a directory
[DEBUG   ] Loaded no_out as virtual quiet
[DEBUG   ] Loaded json_out as virtual json
[DEBUG   ] Loaded yaml_out as virtual yaml
[DEBUG   ] Loaded pprint_out as virtual pprint
[DEBUG   ] loading output in ['/var/cache/salt/minion/extmods/output', '/usr/lib/python2.7/dist-packages/salt/output']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/output, it is not a directory
[DEBUG   ] Loaded no_out as virtual quiet
[DEBUG   ] Loaded json_out as virtual json
[DEBUG   ] Loaded yaml_out as virtual yaml
[DEBUG   ] Loaded pprint_out as virtual pprint
local:
    ----------
    local:
        ----------
        file_|-/home/postgres/.ssh/config_|-/home/postgres/.ssh/config_|-managed:
            ----------
            __run_num__:
                0
            changes:
                ----------
            comment:
                File /home/postgres/.ssh/config is in the correct state
            name:
                /home/postgres/.ssh/config
            result:
                True
root@minion01:/mnt/netjitsu/Linux/home/user1# cat /home/postgres/.ssh/config

root@minion01:/mnt/netjitsu/Linux/home/user1# ls -lac /home/postgres/.ssh/config
-rw------- 1 postgres postgres 1 Oct 17 15:18 /home/postgres/.ssh/config
--snip


What's flooring me is that multiline pillar values have worked for me before, many times without issue, using my original pillar setup (though without pillar.get).

schlag

Oct 17, 2013, 3:43:20 PM
to salt-...@googlegroups.com
OK, I tried using '| indent(8)' to force spacing, which *seems* to render appropriately:


{% if pillar['ssh_configs'] is defined %}
  {% for file, fileargs in pillar['ssh_configs'].iteritems() %}
{{ file }}:
  file.managed:
    - template: jinja
    - user:     {{ fileargs['user'] }}
    - group:    {{ fileargs['group'] }}
    - mode:     {{ fileargs['mode'] }}
    - makedirs: True
    - contents_pillar: | 
        {{ fileargs['content'] | indent(8) }}
  {% endfor %}
{% endif %}

salt-call state.sls environments.all.ssh -l debug
....
....
[DEBUG   ] Rendered data from file: /var/cache/salt/minion/files/base/environments/all/ssh/init.sls:




  
/home/postgres/.ssh/config:
  file.managed:
    - template: jinja
    - user:     postgres
    - group:    postgres
    - mode:     600
    - makedirs: True
    - contents_pillar: | 
        ###################### NOTE #########################
        # This file is managed by salt, do not edit locally #
        # All changes will be overwritten by salt!          #
        ###################### NOTE #########################
        StrictHostKeyChecking no
        UserKnownHostsFile /dev/null
  


[DEBUG   ] Results of YAML rendering: 
OrderedDict([('/home/postgres/.ssh/config', OrderedDict([('file.managed', [OrderedDict([('template', 'jinja')]), OrderedDict([('user', 'postgres')]), OrderedDict([('group', 'postgres')]), OrderedDict([('mode', 600)]), OrderedDict([('makedirs', True)]), OrderedDict([('contents_pillar', '###################### NOTE #########################\n# This file is managed by salt, do not edit locally #\n# All changes will be overwritten by salt!          #\n###################### NOTE #########################\nStrictHostKeyChecking no\nUserKnownHostsFile /dev/null\n')])])]))])
[INFO    ] Executing state file.managed for /home/postgres/.ssh/config
[WARNING ] /usr/lib/python2.7/dist-packages/salt/modules/file.py:2002: DeprecationWarning: With-statements now directly support multiple context managers
  salt.utils.fopen(name, 'rb')) as (src, name_):

[INFO    ] File /home/postgres/.ssh/config is in the correct state
[DEBUG   ] loading output in ['/var/cache/salt/minion/extmods/output', '/usr/lib/python2.7/dist-packages/salt/output']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/output, it is not a directory
[DEBUG   ] Loaded no_out as virtual quiet
[DEBUG   ] Loaded json_out as virtual json
[DEBUG   ] Loaded yaml_out as virtual yaml
[DEBUG   ] Loaded pprint_out as virtual pprint
[DEBUG   ] loading output in ['/var/cache/salt/minion/extmods/output', '/usr/lib/python2.7/dist-packages/salt/output']
[DEBUG   ] Skipping /var/cache/salt/minion/extmods/output, it is not a directory
[DEBUG   ] Loaded no_out as virtual quiet
[DEBUG   ] Loaded json_out as virtual json
[DEBUG   ] Loaded yaml_out as virtual yaml
[DEBUG   ] Loaded pprint_out as virtual pprint
local:
    ----------
    local:
        ----------
        file_|-/home/postgres/.ssh/config_|-/home/postgres/.ssh/config_|-managed:
            ----------
            __run_num__:
                0
            changes:
                ----------
            comment:
                File /home/postgres/.ssh/config is in the correct state
            name:
                /home/postgres/.ssh/config
            result:
                True



Unfortunately, the resulting file is still empty.
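The rendered state above actually hints at why: the block scalar did reach `contents_pillar` as one big string, but file.managed treats that value as a colon-delimited pillar *key path* to look up, not as the content itself. A rough illustration, where `pillar_get` only approximates Salt's colon-delimited pillar.get:

```python
blob = "StrictHostKeyChecking no\nUserKnownHostsFile /dev/null"

# The pillar layout, pared down.
pillar = {"ssh_configs": {"postgres": {"content": blob}}}

def pillar_get(data, key, default=None):
    """Rough approximation of Salt's colon-delimited pillar.get."""
    for part in key.split(":"):
        if not isinstance(data, dict) or part not in data:
            return default
        data = data[part]
    return data

# Passing the content itself: no such pillar key exists, so the lookup
# comes back empty -- hence the empty managed file.
print(pillar_get(pillar, blob))
# -> None

# Passing a key path resolves to the real content.
print(pillar_get(pillar, "ssh_configs:postgres:content"))
```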

schlag

Oct 17, 2013, 3:47:31 PM
to salt-...@googlegroups.com
Got it. Instead of contents_pillar, I went back to plain "contents", and it worked. (Anyone have ideas on why contents_pillar didn't work?)

Anyway, the new state:

--snip
{% if pillar['ssh_configs'] is defined %}
  {% for file, fileargs in pillar['ssh_configs'].iteritems() %}
{{ file }}:
  file.managed:
    - template: jinja
    - user:     {{ fileargs['user'] }}
    - group:    {{ fileargs['group'] }}
    - mode:     {{ fileargs['mode'] }}
    - makedirs: True
    - contents: | 
        {{ fileargs['content'] | indent(8) }}
  {% endfor %}
{% endif %}
--snip
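One portability note on the loop: `dict.iteritems()` exists only under Python 2, so this template breaks on a Python 3 minion, while `.items()` renders identically on both. A standalone sketch using the jinja2 package directly (assumed installed; Salt's default renderer wraps the same library):

```python
from jinja2 import Template

# Pared-down stand-in for the pillar data.
pillar = {
    "ssh_configs": {
        "/home/postgres/.ssh/config": {"user": "postgres", "mode": 600},
    }
}

# .items() instead of .iteritems() keeps the template portable.
tmpl = Template(
    "{% for file, fileargs in pillar['ssh_configs'].items() %}"
    "{{ file }}: {{ fileargs['user'] }} {{ fileargs['mode'] }}\n"
    "{% endfor %}"
)
print(tmpl.render(pillar=pillar), end="")
# -> /home/postgres/.ssh/config: postgres 600
```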


rendering:
--snip
/home/postgres/.ssh/config:
  file.managed:
    - template: jinja
    - user:     postgres
    - group:    postgres
    - mode:     600
    - makedirs: True
    - contents: |
        ###################### NOTE #########################
        # This file is managed by salt, do not edit locally #
        # All changes will be overwritten by salt!          #
        ###################### NOTE #########################
        StrictHostKeyChecking no
        UserKnownHostsFile /dev/null
--snip

end result:

--snip
cat /home/postgres/.ssh/config
###################### NOTE #########################
# This file is managed by salt, do not edit locally #
# All changes will be overwritten by salt!          #
###################### NOTE #########################
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
--snip


Yay. Thanks again for your suggestions.

Mrten

Oct 17, 2013, 4:36:03 PM
to salt-...@googlegroups.com
On 17/10/2013 21:47, schlag wrote:
> got it. instead of contents_pillar, i went back with just "content",
> and it worked. (anyone have ideas on why contents_pillar didn't work?)

I think you're supposed to supply a pillar /name/ to contents_pillar,
not the actual value. There's an added level of indirection.

In your case, something like "ssh_configs:postgres".

M.
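For the archive, a hedged (untested) sketch of what that indirection looks like against the original pillar layout, where the text lives under ssh_configs:postgres:content:

```yaml
/home/postgres/.ssh/config:
  file.managed:
    - user:     postgres
    - group:    postgres
    - mode:     600
    - makedirs: True
    # Pillar key path; file.managed fetches the string itself.
    - contents_pillar: ssh_configs:postgres:content
```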
