Agenda
Attendees
- Webb Brown
- Mark Poko
- Nickolas Kraus
- Matt Ray
- David Sterz
Notes
- Consensus is that this should be the basis going forward; we'll build off of this with future PRs rather than start with a lot of existing code which may have unnecessary content. Leaning towards a simple Helm chart start – pull/1 + Nickolas
- Should we add the Helm chart to the prometheus-community repo? Yes!
- Here are Grafana's jsonnet solutions – proposal is to link to documentation @Mark Poko
- David: Kubernetes days in Amsterdam, FinOps Days, Kubecon, Lightning talk
- Add an Events tab to the website and add them to the Calendar
- Cloud costs: should we add them sooner rather than later? Critical over time. Lots of discussion to have, e.g. idle cost for databases. To begin exploring.
- Open Question: should these be in Prometheus?
- Effort to show OpenCost metrics in Grafana dashboards
- David Sterz + Mark Poko to draft a doc capturing some dashboards and needs
- Reviewed: https://wellarchitectedlabs.com/cost/200_labs/200_cloud_intelligence/
The OpenCost Working Group meeting notes for this meeting (and future and previous ones) are available here:
https://docs.google.com/document/d/1JFB_-sjF8F9UWet1c-dWixdMZY4hri23UlQG5FX5xfY/edit#heading=h.shpe8c3bu23m
If you're interested in continuing the conversation, feel free to respond or join us in the #opencost channel of CNCF Slack. The next session is January 12, 2022. Subscribe to the calendar.
Thanks,
Matt Ray
Senior Community Manager for OpenCost - Kubecost
mat...@kubecost.com
I have an idea for a podcast guest that I think would be interesting to your audience. Craig Box is the VP of Open Source at ARMO, makers of Kubescape. Craig joined ARMO from Google, where he worked on Kubernetes and the Istio service mesh, but he's best known as the founder of the official Kubernetes podcast.
Kubernetes users face a dilemma: invest in an expensive proprietary security platform or try to build their own security solution from multiple narrowly-functional open-source tools.
Kubescape solves this dilemma: it's the first completely open-source Kubernetes security platform, allowing small (and large) DevOps teams to develop secure programs without forking out a fortune.
Kubescape scans Kubernetes clusters, YAML files, and Helm charts for misconfigurations, potential vulnerabilities, and issues with their user configurations. The platform supports multiple security and compliance frameworks like NSA and MITRE, and also allows businesses to create their own customized frameworks.
Craig can talk about:
Kubernetes in general or Kubernetes security (he's pretty well-versed in this area)
Kubescape, built with DevOps in mind, and how that affected the product.
The advantage of an open-source security tool
Topic 1 - Welcome to the show. Tell us a little bit about your background, and where you focus on Cloud Cost Management today.
Topic 2 - The cloud has been around for more than a decade. Why does it seem like Cloud Cost Management has suddenly become such a big topic of discussion over the last couple of years? Is it mostly the pandemic and economy (interest rates), more driven by end-user behaviors, or both?
Topic 3 - What do best practices for Cost Management look like today? Is it mostly about having the right monitoring tools, or is it how groups are organized, or something else?
Topic 4 - How in the world can any finance person understand the nuances of all the services that are available in the cloud, and how they might impact both architecture and ultimately the bill? Who needs to be educating them?
Topic 5 - How do most companies engage around cost management? Does it start with cost management tools and then bring in consultants (e.g. Corey Quinn) if they can’t figure things out? Or is it the CFO getting heavy-handed and setting strict policies?
Topic 6 - Let’s talk about OpenCost.
Topic 7 - Given all the variables involved, can someone be useful in the cloud cost management space if they don't have a strong background in building applications or architecture?
Topic 8 - What are some of the areas of cloud cost management that are most interesting to you today?
FEEDBACK?
Email: show at the cloudcast dot net
Twitter: @thecloudcastnet
Southern California Linux Expo (Mar 9 - 12)
SREcon23 (Mar 21 - 23)
KubeCon EU (Apr 17 - 21)
Open Source Summit NA (May 10 - 12)
FinOps X (Jun 27 - 30)
Open Source Summit EU (Sep 19 - 21)
KubeCon NA (Nov 6 - 10)
Workload Aggregations
container
pod
deployment
statefulset
job
controller name
controller kind
label
annotation
namespace
cluster
Salmon don (Teriyaki chicken don)
Karage curry
vegetable gyoza
Following up on the question about reporting from yesterday's call.
With K8s Allocation Data > Aggregate by "label:their-department-k8s-label"
And with Data Source Grouping > Asset Label "label:their-department-aws-tag"
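A rough sketch of pulling that same label grouping from the Allocation API with curl; the port-forwarded address (localhost:9090) is an assumption, and the label key is the placeholder from the lines above:
```
curl -G 'http://localhost:9090/model/allocation' \
  --data-urlencode 'window=7d' \
  --data-urlencode 'aggregate=label:their-department-k8s-label'
```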
Kubecost CEO @Webb is hosting a webinar tomorrow at 1pm EST on Kubernetes Cost Optimization.
https://hello.kubecost.com/
* Monitor your cloud environments running Kubernetes
* Take actionable steps to reduce cloud spend
* Receive dynamic recommendations for reducing spend without sacrificing performance
Modes for selecting “Single Aggregation” vs. “Multi Aggregation” when building Asset and Allocation Reports. This makes the process of creating reports easier and gives more clarity to what is contained within the Cost Allocation report.
This allows users to set their own custom CPU and RAM utilization goals for container request rightsizing.
Cloud Report alerts issue an overview of the state of the assets over the specified number of days. This is an overview-type summary that can be sent to an email or Slack channel for visibility into non-Kubernetes cloud assets.
V2 filters in Kubecost's Allocation APIs via a new query parameter: filter=. The major improvement for V2 filters is supporting inequality operations, allowing exclusion of namespaces for example.
The old filters (filterNamespaces=x), and therefore old queries, will continue to work. This PR first checks if v2 filters are present (filter=) and, if they aren't, falls back to v1 filters.
The major improvement v2 filters bring is support for inequality operations. For example, if you want to exclude the kube-system namespace from your results: /model/allocation?window=1d&filter=namespace!:"kube-system". See the full documentation for the new query language here. Note: the language does not yet support but will soon be extended to support prefix queries, like v1 filters support (in v1, this looks like: filterNamespaces=kube*).
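As a concrete sketch of the kube-system exclusion above, assuming Kubecost is port-forwarded to localhost:9090:
```
curl -G 'http://localhost:9090/model/allocation' \
  --data-urlencode 'window=1d' \
  --data-urlencode 'filter=namespace!:"kube-system"'
```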
The diff API provides a diff of two windows, returning all the added, removed, or cost-changed assets from the later window (before parameter) to the earlier window (after parameter). This endpoint compares the two asset sets in the given windows and accumulates the results.
The endpoint is available at:
```
http://<kubecost-address>/model/assets/diff
```
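A hedged curl sketch of calling it; the before/after window values (yesterday/today) are illustrative placeholders:
```
curl -G 'http://<kubecost-address>/model/assets/diff' \
  --data-urlencode 'before=yesterday' \
  --data-urlencode 'after=today'
```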
Following up on Monday's call:
1) Would a fortnightly call on Tuesdays at the same time work for you? Tuesday is a little friendlier to US-based folks if they end up on the call.
2) In trying to reproduce your shared PV issue, could we get a backup of your ETL? https://github.com/kubecost/etl-backup should make Engineering's job a little easier.
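If it helps, a very rough sketch of copying the ETL directory out by hand; the app=cost-analyzer label and the /var/configs/db/etl path are assumptions here, so treat the etl-backup README as authoritative:
```
# Find the cost-analyzer pod (label selector is an assumption)
POD=$(kubectl get pod -n kubecost -l app=cost-analyzer -o jsonpath='{.items[0].metadata.name}')
# Copy the ETL directory out of the pod (path is an assumption)
kubectl cp "kubecost/${POD}:/var/configs/db/etl" ./kubecost-etl
```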
FinOps shows how every dollar spent in the cloud is tracked, and correlates that spend with the value it generates.
Joseph & Philip
Kate, Kirby and Matt
Azure Rate Card API settings aren't working, need accurate
Set for China(?)
Credentials were valid, still not getting results
Need to manage chargebacks, wide variety of internal customer sizes and usage
Would like more reports
Jason, Keith, Kirby, and Matt with Dale Deputy and John Marszalek
* Introductions again
* Dale: technical director, manages Airflow/Astronomer(?) for internal teams
* John: senior cloud engineer, manages clusters & Kubecost, the "technical guy"
* AWS (EKS) & GCP, cloud costs are rising, need to share info with game teams for optimizations
* FY2023 will be sharing costs with game studios
* Kubecost for estimates for the studios, want a walkthrough
* Airflow/Astronomer per namespace, want to get underlying cloud costs too (S3/networking/etc.)
* Want to filter out Kubecost/Prometheus/Grafana from reports of namespaces
* Will need to tag everything, currently using Terraform and will need to update accordingly
* Probably not(?) using federated Kubecost, need to confirm, but want to expand to more clusters
* Followups
* Kirby/Dale intro to product call
* Need to upgrade to latest
* Kubecost is currently running on the production Astronomer EKS cluster, would like to move it to a separate cluster
* Will want to convert to SSO with EA-wide Okta
* Slack connect needs to be sorted
14:00 (SYD) - 18:45 (NAN)
Flight number FJ 910
21:40 (NAN) - 11:50 (LAX)
Flight number FJ 810
15:25 (LAX) - 20:14 (AUS)
Flight number FJ 5081
03YC3F0
Manage your account »
2022-05-26
Kirby and Matt call with Satish Mandoddi, John Marszalek, and Dale Deputy.
* Introductions for Matt and Dale (new technical director replacing Mike(?))
* This is the central team, Eric & Martin have spun up their own separate Kubecost cluster for Horizon(?).
* Satish is leaving for another team within EA (still doing K8s), John will be point going forward.
* They will want to integrate Kubecost data via API into other platforms for chargeback purposes
* Want to break down by studios and filter in/out
* Followups
* Already scheduled recurring fortnightly call Thursdays 3pm Pacific
* Next call will start tracking updates, bugs, etc. after Dale familiarizes with the product
* Slack Connect needs to be sorted
315 Montgomery St, Floor 9
---
title: "How to Install and Manage Kubecost Helm Chart using Lens IDE"
description: "Learn how you can use the Lens IDE to easily install, update, and troubleshoot Kubecost on any Kubernetes cluster."
date: 2022-05-10T08:00:00-04:00
canonical_url: "https://blog.kubecost.com/blog/how-to-install-and-manage-kubecost-helm-using-lens"
classes: wide
categories:
- blog
tags:
- Kubernetes
- Kubecost
- Cost Monitoring
- Lens
- Helm
author: Andrew Dawson
---
Using the Lens IDE makes installing, updating, and troubleshooting your Kubecost Helm release extremely simple. Follow the guide below to set up Kubecost using Lens IDE.
![Kubecost and Lens](/assets/images/lens-with-kubecost/kubecost-lens-11.png)
# Kubecost + Lens IDE
When working with Kubecost and your Kubernetes cluster as a whole, visibility is key. Quickly seeing how different objects interact with each other is essential to optimizing your cluster. Although the command line is all-powerful, sometimes k8s developers want to interact with a simple graphical interface; that's where [Lens, the Kubernetes Integrated Development Environment (IDE) by Mirantis](https://k8slens.dev/) comes in.
Kubecost users can connect their clusters to Lens, allowing for simple multi-cluster administration. Each cluster 'workspace' has its own built-in terminal set to the corresponding kubeconfig entry, making it quick and easy to use the command line with the correct kubeconfig settings. This can be especially useful for organizations using [Kubecost Enterprise with multiple federated clusters](https://guide.kubecost.com/hc/en-us/articles/4407601809175-Kubecost-Enterprise-Features).
![Lens Default Dashboard](/assets/images/lens-with-kubecost/kubecost-lens-1.png)
### Install + Manage Kubecost Enterprise via Helm Chart using Lens IDE
One of the benefits of Kubecost is our easy-to-manage [Helm chart](https://guide.kubecost.com/hc/en-us/articles/4407601821207-Installing-Kubecost). Using Lens makes installing, updating, and troubleshooting your Kubecost Helm release extremely simple.
### Simplify Kubecost port forwarding, logs, and pod shells
Using Lens, users can verify all Kubecost related pods are running and see any errors easily using the cluster dashboard and specifying your Kubecost namespace. This is helpful when deploying Durable Storage or troubleshooting a custom integration or settings change. Lens also allows Kubecost users to get to the cost-model container logs and container shell with a few clicks.
### Port-Forward without the CLI Command
Lens allows for simple and secure access to Kubecost services via port forwarding. For those who do not want to expose Kubecost via Ingress or Load Balancer, Lens makes accessing via Service Port Forwarding very easy. Team members with access to the cluster through Lens can access Kubecost securely through the 'Services' section.
## Follow the guide below to set up a Kubecost Helm release using Lens:
### Step 0: Generate a kubeconfig entry for your cluster.
Prerequisite: You will need an active kubeconfig entry for a Kubernetes cluster. In this example, we use Google Kubernetes Engine, but most k8s clusters can be added with this same method.
Generate a config file on your local machine for your cluster. To do this for your GKE cluster, you can click "Connect" on the top of the console. Copy the command-line access command onto your clipboard.
Paste the command into your terminal on your local machine. You will need to have Google Cloud CLI installed on your machine and configured to your project.
### Step 1: Download the latest version of Lens IDE.
Download [Lens, the Kubernetes IDE by Mirantis](https://k8slens.dev/) and install it on your local machine.
### Step 2: Add your cluster to Lens.
Open Lens on your local machine and click on "Browse Clusters in Catalog." You will see your cluster in the list, but it will be marked "disconnected".
![Click 'Browse Clusters in Catalog'](/assets/images/lens-with-kubecost/kubecost-lens-2.png)
Click "Connect" on the cluster, and Lens will start the connection process.
![Click on 'Connect'](/assets/images/lens-with-kubecost/kubecost-lens-3.png)
Once connected, you’ll have a full graphical view of your K8s objects, as well as a built-in terminal connected to that cluster.
![Cluster View](/assets/images/lens-with-kubecost/kubecost-lens-4.png)
### Step 3: Give your cluster a friendly name and other optional settings.
Go to the top left corner and drop down the menu, then click "Settings". Within the settings, you can change the cluster display name, upload an icon, and adjust other settings for the cluster.
![General Settings](/assets/images/lens-with-kubecost/kubecost-lens-5.png)
### Step 4: Prep your cluster for Kubecost installation
Now that the cluster is set up in Lens, open the in-product Terminal. Create the namespace for your new Kubecost installation with the command `kubectl create namespace kubecost`.
![Create 'kubecost' namespace](/assets/images/lens-with-kubecost/kubecost-lens-6.png)
Once the namespace is created, add the Helm repo for Kubecost with the command `helm repo add kubecost https://kubecost.github.io/cost-analyzer/`.
![Get the updated Kubecost repo from Helm](/assets/images/lens-with-kubecost/kubecost-lens-7.png)
### Step 5: Install the Kubecost Helm release using the Lens console.
On the left side of the Lens menu, navigate to "Helm", then click "Charts". Search for "cost" in the search bar. You will see the Helm chart for the Kubecost cost-analyzer that was added in the previous step.
![Locate the Kubecost Helm chart](/assets/images/lens-with-kubecost/kubecost-lens-8.png)
Click the chart and the side information panel will pop up. You can pick the version of Kubecost you want to install, as well as view common [Kubecost Helm values](https://github.com/kubecost/cost-analyzer-helm-chart/blob/master/cost-analyzer/values.yaml).
![View Kubecost chart information](/assets/images/lens-with-kubecost/kubecost-lens-9.png)
After clicking Install, you are given the option to modify different options before deploying. Select the "kubecost" namespace we created in Step 4, and give the release a friendly name like "kubecost". The Helm values are shown in the terminal window and can be edited before deployment.
Click the "Install" button and your Helm Chart will install.
![View Kubecost chart information](/assets/images/lens-with-kubecost/kubecost-lens-10.png)
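For reference, a CLI sketch of roughly what Lens is doing in Steps 4 and 5; the release name "kubecost" matches the friendly name suggested above:
```
kubectl create namespace kubecost
helm repo add kubecost https://kubecost.github.io/cost-analyzer/
helm repo update
helm install kubecost kubecost/cost-analyzer --namespace kubecost
```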
### Step 6: Verify your Kubecost release is available and view objects
Within Lens, click on "Workloads". Select the Kubecost namespace to see all the Kubernetes objects that have been installed via the Helm release. You should see all green. If you see any workloads that are unavailable, your cluster may not have enough resources to run Kubecost.
![View Kubecost workloads in Lens](/assets/images/lens-with-kubecost/kubecost-lens-11.png)
### Step 7: Use the "Helm/Releases" section of Lens to manage your Kubecost release.
Once Kubecost is available, you can click on "Helm/Releases" to see your release within Lens.
![View Helm releases](/assets/images/lens-with-kubecost/kubecost-lens-12.png)
The Helm Release section allows you to update the Kubecost version and Helm values easily. Advanced options can be configured through the values file, like bringing in [external non-k8s cloud costs from AWS, GCP and Azure accounts](https://guide.kubecost.com/hc/en-us/articles/4412369153687-Cloud-Integrations), [SSO setup](https://guide.kubecost.com/hc/en-us/articles/4407595985047-User-Management-SSO-SAML), and more.
![Helm release details](/assets/images/lens-with-kubecost/kubecost-lens-13.png)
Under Helm → Releases, click on the right side drop down menu and select Upgrade. This pop up screen allows you to add and edit Helm Values right within Lens, as well as upgrade your Kubecost version to the latest release by selecting it from the "Upgrade version" drop down.
![Upgrade to new Kubecost version via Helm release](/assets/images/lens-with-kubecost/kubecost-lens-14.png)
### Step 8: Access Kubecost using port forwarding through Lens
To access Kubecost, go to "Services" and find the kubecost-cost-analyzer service. Scroll down and find the "Connection/Ports" section. You can click "Forward" and set the port to 9090.
![Select port 9090 via Services](/assets/images/lens-with-kubecost/kubecost-lens-19.png)
![Forward Port 9090 via Services](/assets/images/lens-with-kubecost/kubecost-lens-18.png)
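If you'd rather skip the GUI, a minimal CLI equivalent, assuming the default service name from the chart and the kubecost namespace used earlier:
```
# Forward local port 9090 to the Kubecost frontend service
kubectl port-forward --namespace kubecost service/kubecost-cost-analyzer 9090:9090
# Then browse to http://localhost:9090
```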
If you have multiple clusters, they can't all use port 9090. In this case, you can use a randomized port. To do this, click on the link text for 'Port 9090' directly.
![Click on port 9090 link via Services](/assets/images/lens-with-kubecost/kubecost-lens-15.png)
Kubecost will open on a randomized port. Copy the URL from the URL bar and paste it into the "Add New Cluster" dialog.
![Add new port to Kubecost](/assets/images/lens-with-kubecost/kubecost-lens-16.png)
You will now be able to access your cluster! The Kubecost core version is free forever on one individual cluster per company, with 15-day metrics retention. [Kubecost Enterprise](https://guide.kubecost.com/hc/en-us/articles/4407601809175-Kubecost-Enterprise-Features) provides federated views for multiple clusters, SSO, unlimited metrics, and dedicated support.
![Access Kubecost dashboard](/assets/images/lens-with-kubecost/kubecost-lens-17.png)
## We’re here to help!
[Join us on Slack](https://join.slack.com/t/kubecost/shared_invite/enQtNTA2MjQ1NDUyODE5LWFjYzIzNWE4MDkzMmUyZGU4NjkwMzMyMjIyM2E0NGNmYjExZjBiNjk1YzY5ZDI0ZTNhZDg4NjlkMGRkYzFlZTU) for any other help, and general Kubernetes and cloud cost optimization banter!
This is Matt Ray from 8009 Bottlebrush Drive, Austin 78750. Please go ahead with the mold protocol at the house ASAP; the lockbox code is 8377. My email address is matth...@gmail.com and Leslie from Dry Force said to send the invoice directly to them. Thanks!
Wksh6+K/ao1t4FD3uUedag==
A developer’s dream is a devops engineer’s nightmare
Google's domain name registrar is out of beta after seven years
Thank you for being my wonderful partner in all things, happy birthday my love!
You are smart, funny, beautiful, and caring and I'm lucky to have you in my life.
Every year with you is a treasure, we love you forever!
old: 3085 2200 4852 9271
new: 3085 2201 4528 9555
https://mattray.dev
Eufy RoboVac 35C Wi-Fi Robotic Vacuum
The middle right button lights up, but the lights do not come on. The bottom right switch doesn't seem to control anything either, and no combination turns on the overhead lights. The other switches control lights. I switched the bulbs out with known working bulbs and that had no effect either.
# Default values
nameOverride: ""
fullnameOverride: ""
replicaCount: 1
image:
  registry: "gcr.io/triggermesh"
  pullPolicy: "IfNotPresent"
  tag: ""
imagePullSecrets: []
rbac:
  create: true
serviceAccount:
  create: true
  annotations: {}
  name: ""
podAnnotations: {}
podSecurityContext: {}
securityContext:
  allowPrivilegeEscalation: false
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
webhook:
  podAnnotations:
    sidecar.istio.io/inject: 'false'
  podSecurityContext: {}
  securityContext: {}
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
Dear Aidan,
Congratulations, my son, on everything you have achieved! Today we are thrilled at your accomplishments; you've worked hard and applied yourself, and I have no doubt you will go far. I am proud of you, and I know everyone in our family has always known you would excel. We can't wait to see what you can accomplish on the journey you've started for yourself.
We love you always,
Dad
https://github.com/triggermesh/mq-eventsource/blob/main/koby.yaml
https://github.com/triggermesh/mq-eventsource/blob/main/config/koby.yaml
Large Salmon Aburi Don
Large Karage Curry
Bento Box + 2 vege tempura
Mix Tempura Udon + Spicy sauce
Veg fried rice
Vege tempura
Large Edamame
Sign Here: Under penalties of perjury, I declare that I have examined this return and accompanying schedules and statements, and to the best of my knowledge and belief, they are true, correct, and complete. Declaration of preparer (other than taxpayer) is based on all information of which preparer has any knowledge.
Joint return? See instructions. Keep a copy for your records.
Your signature | Date | Your occupation: SOFTWARE DEVELOPER | If the IRS sent you an Identity Protection PIN, enter it here
Spouse's signature. If a joint return, both must sign. | Date | Spouse's occupation: RESEARCH ASSOCIATE | If the IRS sent you an Identity Protection PIN, enter it here
Paid Preparer Use Only: Preparer's name: SUSAN LUU | Preparer's signature | PTIN: P00953621 | Firm's EIN: 33-1197384 | Check if: Self-employed | X 3rd Party Designee
640889084
Form 1040 (2018)
AWS Lambda
Alibaba Object Storage Service
Amazon Comprehend
Amazon DynamoDB
Amazon Kinesis
Amazon S3
Amazon SNS
Amazon SQS
Confluent
Datadog
Elasticsearch
Google Cloud Firestore
Google Cloud Storage
Google Cloud Workflows
Google Sheets
HTTP
Hasura
Infra
Jira
Logz
Oracle
Salesforce
SendGrid
Slack
Splunk
Tekton
Twilio
UiPath
Zendesk
AWS CodeCommit
Amazon CloudWatch
Amazon CloudWatch Logs
Amazon Cognito Identity
Amazon Cognito User
Amazon DynamoDB
Amazon Kinesis
Amazon RDS Performance Insights
Amazon SNS
Amazon SQS
Azure Activity Logs
Azure Blob Storage
Azure Event Grid
Azure Event Hubs
Azure IoT Hub
Azure Queue Storage
Azure Service Bus
Google Cloud Audit Logs
Google Cloud Billing
Google Cloud Pub/Sub
Google Cloud Source Repositories
Google Cloud Storage
HTTP Poller
Oracle Cloud Infrastructure
Salesforce
Slack
Twilio
Webhook
Zendesk
AWS CloudWatchLogsSource
AWS CloudWatchSource
AWS CodeCommitSource
AWSCognitoIdentitySource
AWSCognitoUserPoolSource
AWSDynamoDBSource
AWSKinesisSource
AWSPerformanceInsightsSource
AWSS3Source
AWSSNSSource
AWSSQSSource
AzureActivityLogsSource
AzureBlobStorageSource
AzureEventGridSource
AzureEventHubSource
AzureIOTHubSource
AzureQueueStorageSource
AzureServiceBusQueueSource
GoogleCloudAuditLogsSource
GoogleCloudBillingSource
GoogleCloudPubSubSource
GoogleCloudRepositoriesSource
GoogleCloudStorageSource
HTTPPollerSource
OCIMetricsSource
SalesforceSource
SlackSource
TwilioSource
WebhookSource
ZendeskSource
Today’s show is sponsored by strongDM.
Are you still using SSH keys, RDP logins, and database credentials?
Do you have a worn-out Post-it note with all of your passwords on it?
Well, it’s time to access your Infrastructure like it’s no longer 1999.
strongDM is the only modern infrastructure access platform.
It creates a seamless, secure, and observable air-gap between your staff and the critical infrastructure that powers your company.
With strongDM you can:
- Instantly revoke access to every database, Kubernetes cluster, or server with just a click.
- Automatically log every query, SSH, and kubectl command to know who did what, when, and where across your stack.
- Eliminate credentials from end-user workflows to deploy access that's zero-trust and least-privileged by default.
Trusted by the fine folks at Betterment, Peloton, SoFi, and Chime, strongDM is the only way to deploy secure access controls in a way folks love to use.
Don't take my word for it,
Check out strongDM for yourself with a free demo.
Sign up at
strongdm.com/sdt.
That’s strongDM.com/sdt
And, of course, we thank strongDM for sponsoring our show
001009872463402001
https://slack.com/oauth/authorize?&client_id=CLIENT_ID&team=TEAM_ID&install_redirect=install-on-team&scope=admin+client
https://slack.com/oauth/authorize?&client_id=2551509539892.2554918939746&team=triggermesh-community&install_redirect=install-on-team&scope=admin+client
CLIENT_ID
2551509539892.2554918939746
TEAM_ID
triggermesh-community
User OAuth Token
xoxp-2551509539892-2542593833910-2554922782114-e7a17b0dd80577441f63c56c1dbb578a
curl -X POST 'https://triggermesh-community.slack.com/api/users.admin.invite' \
  --data 'email=tes...@mattray.dev&token=xoxp-2551509539892-2542593833910-2554922782114-e7a17b0dd80577441f63c56c1dbb578a&set_active=true' \
  --compressed
On today's episode of the Cloud Native Application Flows podcast, Mark and I are joined by cloud innovator and Cisco Distinguished Engineer Mike Dvorkin. Mike is one of the original architects of Cisco's UCS and their Application Centric Infrastructure and Application Policy Infrastructure Controller. He's also an advisor and investor in numerous infrastructure and technology startups. We hope you enjoy this wide-ranging conversation covering the origin story of UCS, the cost-benefit analysis of cloud repatriation, policy as code, and AI for cows.
AI for cows and making Kubernetes safe for consumers.
"Farmers don't care about Kubernetes"
"If you succeed well enough you put a target on your back"
consuming abstractions versus accessing Kubernetes directly
how do we make our tools safe?
the legacy of Cabletron
the origin story of Cisco's UCS
The coming wave of cloud repatriation?
AI for cows.
guardrails for safety
Nobody should care how the packets get from A to B
Moving on to better problems to solve
Cisco Distinguished Engineer
Nuova Systems - UCS architect
Insieme Networks - ACI (Application Centric Infrastructure) and APIC (Application Policy Infrastructure Controller)
Noiro - Open source policy declaration and enforcement mechanisms
Amplify Partners - Advisor and investor
LoopHole Labs
https://loopholelabs.io/
Databento
https://databento.com/
https://www.linkedin.com/in/dvorkin/
"Spaghetti Under the Table" - the first episode of the @CloudNativeAF podcast with @botchagalupe. Hosts @MattRay and @MRHinkle talk about John's history in DevOps, Chef, Docker, and the future of working in the cloud.
Check out our @CloudNativeAF podcast for engaging conversations about event driven architectures and cloud native infrastructure!
https://cloudnativeaf.com
"O'Reilly's Tea & Scones" - the second @CloudNativeAF podcast episode with @jamesurquhart is now available. Hosts @MattRay and @MRHinkle dive into James' experiences in the cloud, his book Flow Architectures, and @timoreilly's homemade scones and jams.
The Future of Streaming and Event-Driven Integration,
AustralianSuper
Super: GPO Box 1901
Melbourne VIC 3001
Australia
Submission Song (rejected version)
Dear
Perfect
Worker Robot Droid
Chicago Swingers (rejected version)
Ambient 1 - Teenage Nites Revisited
Float
Pulsars Dub
Nervous Twitch
Ruled By Numbers
Romp And Frolic
Consumers
Silicon Teens (rejected version)
Zero Zero
Pacified (excerpt)
Count Off
Wisconsin
Tunnel Song
Suffocation
Owed to a Devil
Technology
Machine Talk
Silicon Teens
Save You
Lucky Day Part I
Lucky Day Part II
My Pet Robot
Runway
Submission Song
Tales From Tomorrow
Das Lifeboat
https://github.com/triggermesh/aktion
https://github.com/triggermesh/anthos-poc
https://github.com/triggermesh/apidocs-gen
https://github.com/triggermesh/autobots
https://github.com/triggermesh/aws-custom-runtime
https://github.com/triggermesh/aws-event-sources
https://github.com/triggermesh/aws-kinesis-channel
https://github.com/triggermesh/aws-sources-operator
https://github.com/triggermesh/awseventbridge-event-target
https://github.com/triggermesh/azure-event-channel
https://github.com/triggermesh/azure-runtime
https://github.com/triggermesh/backend
https://github.com/triggermesh/bridges
https://github.com/triggermesh/bringyourown
https://github.com/triggermesh/bumblebee
https://github.com/triggermesh/bureaucracy-management
https://github.com/triggermesh/c2fo-poc
https://github.com/triggermesh/charts
https://github.com/triggermesh/chonk
https://github.com/triggermesh/cisco-poc
https://github.com/triggermesh/CloudRunToKinesis
https://github.com/triggermesh/config
https://github.com/triggermesh/dbot
https://github.com/triggermesh/deploy
https://github.com/triggermesh/do
https://github.com/triggermesh/docs
https://github.com/triggermesh/DocShot
https://github.com/triggermesh/dx-poc
https://github.com/triggermesh/e2e
https://github.com/triggermesh/event-sources
https://github.com/triggermesh/eventing-upgrade-check
https://github.com/triggermesh/eventlib
https://github.com/triggermesh/eventstore
https://github.com/triggermesh/eventstore-servers
https://github.com/triggermesh/examples
https://github.com/triggermesh/fe-poc
https://github.com/triggermesh/flaretokinesis
https://github.com/triggermesh/frontend
https://github.com/triggermesh/FTK--RUST-WIP-
https://github.com/triggermesh/function
https://github.com/triggermesh/ghbackend
https://github.com/triggermesh/github-third-party-source
https://github.com/triggermesh/homebrew-tap
https://github.com/triggermesh/ibm-mq-provisioner
https://github.com/triggermesh/internship
https://github.com/triggermesh/knative-firebase-source
https://github.com/triggermesh/knative-lambda-runtime
https://github.com/triggermesh/knative-local-registry
https://github.com/triggermesh/knative-sources
https://github.com/triggermesh/knative-targets
https://github.com/triggermesh/knative-training
https://github.com/triggermesh/knative-upgrade-task
https://github.com/triggermesh/koby
https://github.com/triggermesh/ksamples
https://github.com/triggermesh/misc
https://github.com/triggermesh/mq-eventsource
https://github.com/triggermesh/nodejs-runtime
https://github.com/triggermesh/openfaas-runtime
https://github.com/triggermesh/oracle-events
https://github.com/triggermesh/pipeline-tasks
https://github.com/triggermesh/pnc-poc
https://github.com/triggermesh/router
https://github.com/triggermesh/routing
https://github.com/triggermesh/runtime
https://github.com/triggermesh/runtime-build-tasks
https://github.com/triggermesh/terraform-provider-tm
https://github.com/triggermesh/test-infra
https://github.com/triggermesh/til
https://github.com/triggermesh/tm
https://github.com/triggermesh/training
https://github.com/triggermesh/triggerflow
https://github.com/triggermesh/triggermesh-operator
https://github.com/triggermesh/vscode-bridge-dl
https://github.com/triggermesh/vsphere-source
https://github.com/triggermesh/config
bureaucracy-management
koby
test-infra
awseventbridge-event-target
routing
triggerflow
event-sources
eventstore-servers
knative-targets
frontend
oracle-events
misc
cisco-poc
charts
chonk
pnc-poc
c2fo-poc
backend
github-third-party-source
ghbackend
router
vscode-bridge-dl
internship
fe-poc
dx-poc
anthos-poc
DocShot
apidocs-gen
training
autobots
eventlib
flaretokinesis
CloudRunToKinesis
FTK--RUST-WIP-
ibm-mq-provisioner
knative-upgrade-task
do
knative-firebase-source
deploy
e2e
examples
GKE cluster configuration
TriggerMesh User/Org Management Service
Integration components for everyone
A collection of utilities for testing the behaviour and performance of the TriggerMesh platform.
Event target for the SaaS partner integration with AWS EventBridge
Event sources for knative
Waving goodbye to events
TriggerMesh Management frontend
Sample event code and write-up for Oracle DB Events
A repository to keep track of miscelleanous company wide engineering issues
TriggerMesh/Cisco PoC
Charts for TriggerMesh Deployments
All TriggerMesh integrations, released as an Absolute Unit of a controller manager.
Triggermesh/PNC Proof of Concept
TriggerMesh for C2FO PoC
TriggerMesh backend
Triggermesh backend to register repos and deploy functions on knative
CloudEvents routing inside the Knative environment
An extension for TriggerMesh bridge description language
Pierre's Internship repository
DX / TriggerMesh Proof of Concept repo
UiPath Process that aims to update documentation screenshots.
API Documentation generation
Packages supporting event related operations
Cloudflare worker with authentication created to send JSON information to be digested in AWS Kinesis
THIS SPACE IS CURRENTLY UNDER DEVELOPMENT.
An IBM MQ ClusterChannelProvisioner for Knative Eventing
Kustomize Deployment on DigitalOcean
Knative Firefbase Event Sources
BDD-style tests that integrate different TriggerMesh components in real clusters
some knative examples
triggermesh/frontend TriggerMesh Management frontend
triggermesh/test-infra A collection of utilities for testing the behaviour and performance of the Trigge...
triggermesh/e2e-github-eventdisplay-3688 Generated by the TriggerMesh e2e test suite
triggermesh/config GKE cluster configuration
triggermesh/bureaucracy-management TriggerMesh User/Org Management Service
triggermesh/koby Integration components for everyone
triggermesh/awseventbridge-event-target Event target for the SaaS partner integration with AWS EventBridge
triggermesh/knative-sources Knative event sources controller public 20h
triggermesh/bumblebee CloudEvents Transformation engine public 20h
triggermesh/routing private 20h
triggermesh/triggerflow private 20h
triggermesh/aws-kinesis-channel An Event Channel Controller For AWS Kinesis public 20h
triggermesh/azure-event-channel A knative Channel Controller for Azure Event Hub public 20h
triggermesh/event-sources Event sources for knative private 20h
triggermesh/aws-custom-runtime Knative Function Using the AWS Lambda Runtime API public 20h
triggermesh/aws-event-sources Knative event sources for AWS services public 20h
triggermesh/eventstore-servers private 20h
triggermesh/function public 20h
triggermesh/knative-targets Waving goodbye to events private 20h
triggermesh/docs Documentation and Issues for https://cloud.triggermesh.io public 2d
triggermesh/eventstore Stateful Events Store public 3d
triggermesh/til TriggerMesh Integration Language, interpreter and CLI public 4d
triggermesh/oracle-events Sample event code and write-up for Oracle DB Events private 6d
triggermesh/misc A repository to keep track of miscelleanous company wide engineering issues private 6d
triggermesh/cisco-poc TriggerMesh/Cisco PoC private 9d
triggermesh/charts Charts for TriggerMesh Deployments private 9d
triggermesh/knative-lambda-runtime Running AWS Lambda Functions on Knative/Kubernetes Clusters public 9d
triggermesh/tm TriggerMesh CLI to work with knative objects public 10d
triggermesh/dbot A Discord Bot for fun public 12d
triggermesh/chonk All TriggerMesh integrations, released as an Absolute Unit of a controller manager. private 12d
mattray@scruffy in ~/ws/mattray.github.io on :master via ℜ:v2.6.3
$ gh repo list triggermesh --no-archived --private --source --limit 100
Showing 43 of 43 repositories in @triggermesh that match your search
triggermesh/e2e-github-eventdisplay-3688 Generated by the TriggerMesh e2e test suite private 7h
triggermesh/config GKE cluster configuration private 7h
triggermesh/bureaucracy-management TriggerMesh User/Org Management Service private 7h
triggermesh/koby Integration components for everyone private 14h
triggermesh/test-infra A collection of utilities for testing the behaviour and performance of t... private 2h
triggermesh/awseventbridge-event-target Event target for the SaaS partner integration with AWS EventBridge private 19h
triggermesh/routing private 20h
triggermesh/triggerflow private 20h
triggermesh/event-sources Event sources for knative private 20h
triggermesh/eventstore-servers private 20h
triggermesh/knative-targets Waving goodbye to events private 20h
triggermesh/frontend TriggerMesh Management frontend private 30m
triggermesh/oracle-events Sample event code and write-up for Oracle DB Events private 6d
triggermesh/misc A repository to keep track of miscelleanous company wide engineering issues private 6d
triggermesh/cisco-poc TriggerMesh/Cisco PoC private 9d
triggermesh/charts Charts for TriggerMesh Deployments private 9d
triggermesh/chonk All TriggerMesh integrations, released as an Absolute Unit of a controll... private 12d
triggermesh/pnc-poc Triggermesh/PNC Proof of Concept private 15d
triggermesh/c2fo-poc TriggerMesh for C2FO PoC private Jun 29, 2021
triggermesh/backend TriggerMesh backend private 21d
triggermesh/github-third-party-source private Jun 2, 2021
triggermesh/ghbackend Triggermesh backend to register repos and deploy functions on knative private 21d
triggermesh/router CloudEvents routing inside the Knative environment private Jun 1, 2021
triggermesh/vscode-bridge-dl An extension for TriggerMesh bridge description language private Jun 16, 2021
triggermesh/internship Pierre's Internship repository private May 5, 2021
triggermesh/fe-poc private Mar 11, 2021
triggermesh/dx-poc DX / TriggerMesh Proof of Concept repo private Jan 22, 2021
triggermesh/anthos-poc private Nov 10, 2020
triggermesh/DocShot UiPath Process that aims to update documentation screenshots. private Nov 6, 2020
triggermesh/apidocs-gen API Documentation generation private Oct 19, 2020
triggermesh/training private Oct 8, 2020
triggermesh/autobots private Jul 20, 2020
triggermesh/eventlib Packages supporting event related operations private Apr 15, 2020
triggermesh/flaretokinesis Cloudflare worker with authentication created to send JSON information t... private Mar 31, 2020
triggermesh/CloudRunToKinesis private Mar 23, 2020
triggermesh/FTK--RUST-WIP- THIS SPACE IS CURRENTLY UNDER DEVELOPMENT. private Feb 17, 2020
triggermesh/ibm-mq-provisioner An IBM MQ ClusterChannelProvisioner for Knative Eventing private May 17, 2019
triggermesh/knative-upgrade-task private Apr 3, 2019
triggermesh/do Kustomize Deployment on DigitalOcean private Mar 25, 2019
triggermesh/knative-firebase-source Knative Firefbase Event Sources private Mar 2, 2019
triggermesh/deploy private Feb 13, 2019
triggermesh/e2e BDD-style tests that integrate different TriggerMesh components in real ... private Jan 31, 2019
triggermesh/examples some knative examples private Oct 19, 2018
aktion
aws-custom-runtime
aws-event-sources
aws-kinesis-channel
aws-sources-operator
azure-event-channel
azure-runtime
bridges
bringyourown
bumblebee
dbot
docs
eventing-upgrade-check
eventstore
function
homebrew-tap
knative-lambda-runtime
knative-local-registry
knative-sources
knative-training
ksamples
mq-eventsource
nodejs-runtime
openfaas-runtime
pipeline-tasks
runtime
terraform-provider-tm
til
tm
triggermesh-operator
vsphere-source
Translates GitHub Actions into Tekton and Knative Objects
Knative Function Using the AWS Lambda Runtime API
Knative event sources for AWS services
An Event Channel Controller For AWS Kinesis
TriggerMesh Sources for Amazon Web Services
A knative Channel Controller for Azure Event Hub
Running Functions in a Azure Function runtime with Knative
Curated set of Eventing Bridges
Bring Your Own Sources, Targets and Transformations
CloudEvents Transformation engine
A Discord Bot for fun
Documentation and Issues for https://cloud.triggermesh.io
Upgrade check for Knative Eventing. v0.13.x --> v0.14.x
Stateful Events Store
A Homebrew Tap for TriggerMesh tools
Running AWS Lambda Functions on Knative/Kubernetes Clusters
Explores the options for https://github.com/knative/serving/issues/23
Knative event sources controller
A Knative training ala TGIK
Knative serverless examples
An IBM MQ Knative Event Source
Build template and source examples for Knative functions in nodejs
Openfaas runtimes to deploy knative services
Building blocks for your CI/CD using Knative Build Pipeline with TriggerMesh
Triggermesh clusterbuildtemplates
Terraform plugin for knative resources by TriggerMesh
TriggerMesh Integration Language, interpreter and CLI
TriggerMesh CLI to work with knative objects
An OpenShift Operator for TriggerMesh
A Knative Event Source for VMware vSphere
pipeline-tasks
nodejs-runtime
Level 21 500 Collins Street
Melbourne VIC 3000, Australia
Organization/Company: TriggerMesh
Website: https://triggermesh.com
Country: USA
Contact: @TriggerMesh
Usage scenario: Built on Knative, the TriggerMesh integration platform connects data and events from virtually any application or platform to any other, in the cloud or on-premises.
Status: Production
Organization/Company: IBM
Project/Product Name: IBM Cloud Code Engine
Website: https://cloud.ibm.com/codeengine
Country: USA
Contact: @IBMCodeEngine
Usage scenario: Code Engine is a managed cloud-based hosting service. We host containerized workloads and for web-serving ones we use Knative Serving as the infrastructure. We are also using Knative Eventing as the basis for our eventing support.
Status: Production
DevOps.com article
July Product Update
Open Source Strategy
Oracle Meetup
Operational Efficiency with Cloud-Native Integration review
Update "The DevOps Guide to Integrating with Public Clouds"
Stickers
Apache 2.0
The Apache 2.0 license allows source code to be used with no restrictions other than attribution and patent protection. This is the open source license used by Knative and most CNCF projects because it is considered to be the most business-friendly for contributing and consuming.
Berkeley Software Distribution (BSD)
The BSD license is considered one of the most permissive software licenses. It is similar to the Apache license without the patent protection provisions.
GNU General Public License (GPL)
The GPL is a strong copyleft license that requires open sourcing any modifications to the code, and code may not be linked against it without open sourcing. Sole copyright holders frequently used the GPL to restrict commercial competition, offering the only permitted commercial version under a dual commercial license. Many businesses avoid GPL-licensed software because of its "viral" nature.
GNU Affero General Public License v3.0 (AGPL)
The AGPL is a strong copyleft license that expands on the GPL by requiring managed offerings and cloud-provided services to also open source their changes. Grafana Labs chose this license because, as the sole copyright holder, they can provide hosted versions of their software without the AGPL's obligations applying to them, while blocking potential competitors.
Business Source License (BSL)
BSL features are free to use and the source code is available, but users may not use the product as a service without an agreement with TriggerMesh. The BSL is not certified as an open-source license, but most of the Open Source Initiative (OSI) criteria are met. Cockroach Labs uses the BSL with a 3-year Apache 2.0 expiration.
Cockroach Community License (CCL)
The source code is available to view and modify, but it cannot be reused without an agreement with Cockroach Labs.
Server Side Public License (SSPL)
The SSPL is used by MongoDB and is an extension to the GPL license similar to the AGPL but restricting commercial cloud competitors. It is not considered an open source license by the OSI but it has been adopted by Elastic as well.
Redis Source Available License (RSAL)
Redis Labs uses the RSAL for their extensions to Redis that restrict competitive "database products" built on top of BSD-licensed Redis.
Apache 2.0 modified with Commons Clause
This license was introduced by Redis Labs to prohibit commercial versions of open source software without permission from the original authors. It is not considered an open source license and Redis Labs eventually abandoned it in favor of their own Redis Source Available License which was less restrictive.
Commercial License
This is traditional enterprise software. We may determine any source code availability we want, but users may not use the product without a commercial agreement in place. Trial licenses may be offered to give limited free access to the product.
Applying Trademark
Trademarks may be enforced over open source software to restrict distribution of binaries and tools that use the trademarked names. This allows for the software to be completely open source while restricting distribution of the software without licenses. This is the approach used by Red Hat (GPL) and Chef Software (Apache 2.0), where the software is completely open source but usage requires accepting a commercial license for trademarked content (RHEL and Chef Infra respectively). Community distributions (CentOS and CINC) use the same source code with trademarked content removed.
License Enforcement
If we decide to restrict access to binaries or source code behind trademarks or commercial licenses, license enforcement is a frequent consideration. The reality is that most enterprises are very sensitive to running unlicensed code and I don't believe we should restrict usage other than requiring users to acknowledge the license.
## Description
<!--- Please let us know what needs to be fixed -->
## Screenshots
<!--- Something not looking right? Please upload a screenshot and provide browser details -->
## Additional context
<!--- If there's anything else relevant, please let us know -->
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.
## Stacktrace
<!--- Please include the stacktrace.out output or link to a gist of it, if there is one. -->
Cloud agnostic, thanks to the Kubernetes foundation: runs on Anthos, OpenShift, Tanzu, OKE, etc., in the cloud, on-premises, or self-managed
Extensible via a plug-in architecture that can be extended by TriggerMesh or open source contributors
Leverages open source software from Google/VMware/Red Hat and others to bring products to market faster
Why your integration platform should be Kubernetes-based
This would make the value-prop argument for Kubernetes users: our API is an extension of the K8s API, so it's simple to learn and you can manage TriggerMesh the same way you manage other distributed apps. What else?
API is an extension of the Kubernetes API = nothing new to learn
I'd like to really explore the topic of cloud native apps and make the case that:
Growth of microservices/cloud native apps/serverless means the need to integrate grows exponentially (see the image on page 2 of this guide; you can also pull a lot of language from that guide as well)
The only logical way to integrate this expanding galaxy of services is cloud natively
Why
How
James Urquhart
Event-driven Applications
Data Analytics Applications
Data Pipeline Applications
Matt Ray
ma...@mattray.dev
+610457231372
+15127312218
3 Use Cases of Applications/Architectures
Event Driven Architecture (Event Flows)
Data Analytics
Data Pipelines
Cloud Native Integrations
Data and Event Integrations
We bring them all together with BridgeDL and deploy on Kubernetes anywhere in the world
Are often manual (slow)
Generate too much data from scanning-oriented approaches
Catch problems too late in the development cycle to economically fix
Don't manage exceptions appropriately
Building an External 9.7" Monitor from an iPad
The Eggnoggin Toboggan feat. Briggs
Vet records
Socialization
Desex rebate?
Size?
We are interested in your Beaglier puppy. We live in Sydney and would love to get the dog on Sunday if available. Our labrador died last month and we are very familiar with providing a home for a dog and have 3 children (11, 13 & 17) who are eager to get a new dog.
Hi, I'm interested in the ad for "Beaglier Male". Has the animal had all of its shots and do you have all documents? When can I meet "Beaglier Male"?
I am really looking forward to discussing the opportunity with John but I am unfortunately unavailable at those times. I am available anytime next week after 3pm Central.
I would like to reschedule an interview with you if possible to a later date. I am available [give three or four dates and times over the next few weeks].
We are interested in your Border Collie x Golden Retriever puppy. We're in Sydney but I would gladly come up to pick him up. Is he already desexed? Our labrador cross died last month and we are very familiar with providing a home for a dog and have 3 children (11, 13 & 17) who are eager to get a new dog.
Border Collie x Golden Retriever
Hi, I'm interested in the ad for "Labrador cross 9 months puppy". Has the animal had all of its shots and do you have all documents? When can I meet "Labrador cross 9 months puppy"?
Chicken Laksa
Chow Mein BBQ Pork
Veg singapore noodle
Veg fried rice
Veg spring rolls
Crispy calamari
The reduced
First have read-only views of cookbooks, roles, environments, databags and clients.
Then have edit views of roles, environments, databags and clients.
Have the node views, both read + edit: manage tags, edit node runlist, and other existing Manage features for nodes.
This will be the focus for next couple of months.
Read-only views of cookbooks, roles, environments, databags and clients are expected this Q1.
Editing of roles, environments, databags and clients will be tentatively released in that order.
The node views with both read & edit, managing tags, edit node runlist, and other existing Manage features for nodes are scheduled for this year.
katsu curry
teriyaki chicken don
yaki udon
vegetarian roll
teriyaki salmon roll
vegetable spring roll
edamame
okura tempura
sweet potato tempura
inari sushi
veg gyoza or avo tempura
rice
miso soup
Against my better judgement I signed up for the Sun Run 10k this Saturday. I'm raising money for SAFE Animal Rehoming if you'd like to donate:
https://sunrun2021.grassrootz.com/saferehoming/matt-ray/
The chef_client_trusted_certificate resource adds certificates to Chef Infra Client's trusted certificate directory, allowing the Chef Infra Client to communicate with internal encrypted resources without errors.
The chef_client_launchd resource for macOS, the chef_client_scheduled_task resource for Windows, and the chef_client_systemd_timer resource for Linux all schedule running the Chef Infra Client on their respective platforms.
Use the chef_client_systemd_timer resource to set up the Chef Infra Client to run as a systemd timer.
Use the chef_client_launchd resource to configure the Chef Infra Client to run on a schedule on macOS systems.
Use the chef_client_scheduled_task resource to set up the Chef Infra Client to run as a scheduled task.
The chef_client_config resource creates a client.rb file in the Chef Infra Client configuration directory with your specified settings.
A customer reports that InSpec's http resource stopped working after upgrading from Chef 15.6.10 to 16.7.61 (InSpec 4.18.39 to 4.23.1):
```
describe http('<url>', method: 'POST', headers: { 'Content-Type' => 'application/json' }, ssl_verify: false) do
its('status') { should cmp 200 }
end
```
```Failed to load source for <file name>: undefined method `http' for #<Inspec::ControlEvalContext:0x0000000002ec1498>```
Has this already been fixed? I don't see anything in the CHANGELOG that looks promising, so I'm hesitant to ask them to upgrade.
Waffle fries
BenBry Herb Aioli (gf)
Sweet potato wedges
Smokey BBQ
Onion rings
Wasabi Mayo
2X Hangover Cure (V)
BenBry Cheddar
Le Coq
Double BenBry Cheddar - no salad
Need to refill an allopurinol prescription and get a new Maxalt prescription for occasional migraines
Enclosed are the Jabra Elite Active 75t True Wireless Bluetooth Sports Earbuds, Amazon Order ID: 503-2504472-3778218, that I was instructed to return to you. These are being returned because they worked for a month before the right earbud's audio started cutting out and then stopped completely. I opened an issue with Jabra's support (ticket number #CS01341643) and they instructed me to contact Amazon, who put me back in contact with you. Either a refund or a replacement set would be acceptable.
Thanks,
Matt Ray
matth...@gmail.com
0457231372
Jabra Support Consumer <consumersup...@mailer1.jabra.com>
Wed, Dec 16, 2020, 6:26 PM
to me
Dear Customer,
Thank you for submitting your support request. The Jabra Support team will be in touch shortly.
You have received a message from the Amazon Seller - XtremeOnline
Count Product Name and ASIN
1 Jabra Elite Active 75t True Wireless Bluetooth Sports Earbuds, Compact Design, 28 Hours Battery, Charging Case Included - Navy
ASIN: B083WXQSJZ
Hi Matthew:
Thank you for shopping with us. Amazon seller XtremeOnline would like to follow up on your recent return.
Message from seller XtremeOnline:
Hi Matthew,
Please send the Earbuds back within 5 business days at the address below:
Xtreme Communications
PO BOX 4129
Robina Town Centre
QLD 4230
We will assess your returned item for the problem and advise you of the outcome within 7 business days of receiving it.
Thank you
Regards,
Xtreme Team
You can send a message to the seller XtremeOnline by replying to this email.
You can review Amazon's return policies by clicking here. If you are dissatisfied with your return experience, please contact us. If you were contacted inappropriately by the seller, please report this message.
We hope to see you again soon.
Execution Error:undefined method `where' for /etc/shadow:#<Class:0x0000000004a6c418>
You can do it using the chef-server-ctl add-client-key command for the validator. You will then have two valid validator keys for that client. Once you've tested the new one's function, you can remove the original key named "default" with the delete-client-key subcommand. View keys with chef-server-ctl list-client-keys.
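A rough sketch of that rotation; ORG_NAME, the validator client name, and the key file path are placeholders to adjust to your setup:
```
# Add a second key for the validator client
chef-server-ctl add-client-key ORG_NAME ORG_NAME-validator --public-key-path new-validator.pub --key-name rotated
# Confirm both keys are present
chef-server-ctl list-client-keys ORG_NAME ORG_NAME-validator
# After verifying the new key works, remove the original
chef-server-ctl delete-client-key ORG_NAME ORG_NAME-validator default
```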
chef-server-ctl delete-user-key #{user_name} default
chef-server-ctl user-create #{user_name} #{user_first_name} #{user_last_name} #{user_email} #{user_pass} -f #{user_key}
chef-server-ctl delete-user-key USER_NAME KEY_NAME
chef-server-ctl add-user-key
834 Pittwater Rd, #301, Dee Why NSW 2099
1600 West 38th St. #312 Austin, TX 78731, USA
They sent a letter explaining the payout, but here's the rough breakdown:
Common stock closing price: $1.0370 per share paid now
Estimated escrow price: $0.1577 per share paid after settling debts (I don't remember if it was 3 or 6 months)
Hopefully you got a good return on your investment.
[
  {
    "uuid": "ad40cc95-409e-416a-a4d6-2f4b7718549b",
    "name": "Tanjirooooo"
  },
  {
    "uuid": "46fcf6f9-63fd-4686-9984-ccd94e802959",
    "name": "Aidomo"
  }
]
[
  {
    "uuid": "f5e756ee-37eb-4971-b239-3e148e44dae8",
    "name": "bluetackistasty",
    "created": "2020-11-03 04:31:53 +0000",
    "source": "LokiTheCrusader",
    "expires": "forever",
    "reason": "Banned by an operator."
  },
  {
    "uuid": "46fcf6f9-63fd-4686-9984-ccd94e802959",
    "name": "Aidomo",
    "created": "2020-11-13 21:07:59 +0000",
    "source": "LokiTheCrusader",
    "expires": "forever",
    "reason": "Banned by an operator."
  }
]
[13.11 22:54:42] [Disconnect] User com.mojang.authlib.GameProfile@268b2c0a[id=91042538-4913-443e-aa41-b3154682aa9b,name=LeastResistance,properties={textures=[com.mojang.authlib.properties.Property@27871b45]},legacy=false] (/119.18.3.31:26446) has disconnected, reason: You are not white-listed on this server!
[13.11 22:54:42] [Server] User Authenticator #1/INFO: UUID of player LeastResistance is 91042538-4913-443e-aa41-b3154682aa9b
It looks like the path would be to:
1) Perform Automate backup
2) Export current configuration
3) Shut down Automate
4) Write new config patch for external Elasticsearch 'external-es.toml'
5) Restore with something like this:
chef-automate backup restore --debug --airgap-bundle update.aib --patch-config external-es.toml --no-check-version "$test_backup_id"
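A minimal sketch of what the 'external-es.toml' patch from step 4 could contain, assuming a hypothetical external Elasticsearch endpoint (the node address here is made up):
# write the patch config pointing Automate at the external Elasticsearch cluster
cat > external-es.toml <<'EOF'
[global.v1.external.elasticsearch]
  enable = true
  nodes = ["https://external-es.example.com:9200"]
EOF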
# Terraform resource
# Virtual Machine Resources
resource "azurerm_virtual_machine" "vm" {
  name                  = var.vm-name
  location              = azurerm_resource_group.vm-rg.location
  resource_group_name   = azurerm_resource_group.vm-rg.name
  network_interface_ids = [azurerm_network_interface.nic.id]
  vm_size               = var.vm-size
  tags                  = var.vm-tags

  os_profile {
    computer_name  = var.vm-name
    admin_username = var.vm-username
    admin_password = var.vm-password

    # Cloud-Init Chef Configuration
    # Uses the cloud-init template located in the module dir
    # For RHEL images that support cloud-init see:
    # https://docs.microsoft.com/en-us/azure/virtual-machines/linux/using-cloud-init
    custom_data = base64encode(templatefile("${path.module}/cloud-init.tmpl",
      {
        chef-node-name       = var.chef-node-name,
        chef-server-url      = var.chef-server-url,
        chef-policy-group    = var.chef-policy-group,
        chef-policy-name     = var.chef-policy-name,
        chef-ssl-verifymode  = var.chef-ssl-verifymode,
        chef-omnibus-version = var.chef-omnibus-version,
        chef-validation-name = var.chef-validation-name,
        chef-validation-cert = var.chef-validation-cert
      }))
  }
}
# cloud-init installs chef-client
# Content of cloud-init.tmpl:
#cloud-config
chef:
  install_type: "packages"
  node_name: "${chef-node-name}"
  server_url: "${chef-server-url}"
  environment: ""
  log_location: "/etc/chef/chef.log"
  ssl_verify_mode: "${chef-ssl-verifymode}"
  validation_name: "${chef-validation-name}"
  validation_cert: "${chef-validation-cert}"
runcmd:
  - [sh, -c, echo -e "policy_group \"${chef-policy-group}\"\npolicy_name \"${chef-policy-name}\"\nchef_license \"accept\"\n" >> /etc/chef/client.rb]
  - [chef-client]
output: {all: '| tee -a /var/log/cloud-init-output.log'}
We had to update the client.rb file ourselves, as cloud-init's chef module (https://cloudinit.readthedocs.io/en/20.3/topics/modules.html#chef) doesn't appear to support Policyfiles directly, but it's no big deal.
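For reference, the runcmd above leaves lines like these in /etc/chef/client.rb (the policy group/name values here are hypothetical stand-ins for the template variables):
policy_group "production"
policy_name "base"
chef_license "accept"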
Workers Cover Plus with $0 Excess and Comprehensive Extras
Temporary Skill Shortage (subclass 482)
August 27, 2020 - August 27, 2021
732-094
711500
986811914
We have a customer who is discussing rolling Automate out to a large number of teams and nodes and has questions about UI scaling. What would we consider allowed concurrent UI access to Automate for reporting with 20 Chef Infra Servers and 100k desktop nodes? They envision many disparate teams having dashboards up to watch their nodes and are curious if we have any guidance on this.
Do we have guidance on:
1. server node count scaling up to 20k from 2k
2. node counts scaling up to 20 servers + 100k desktops
3. whether it would be a linear percentage of the total number of nodes
DEBUG: Converging node nzakdot0029rzrw.elinux.westpac.co.nz
Recipe: effortless-start::default
  * hab_package[core/hab-launcher] action install
[2020-09-15T13:38:13+12:00] INFO: Processing hab_package[core/hab-launcher] action install (effortless-start::default line 7)
/root/.chef/local-mode-cache/cache/cookbooks/habitat/libraries/provider_hab_package.rb:139: warning: constant Net::HTTPServerException is deprecated
hab pkg path core/hab-launcher
✗✗✗
✗✗✗ Cannot find a release of package: core/hab-launcher
✗✗✗
ERROR HERE
* No candidate version available for core/hab-launcher
================================================================================
Error executing action `install` on resource 'hab_package[core/hab-launcher]'
================================================================================
Chef::Exceptions::Package
-------------------------
No candidate version available for core/hab-launcher
Resource Declaration:
---------------------
# In /root/.chef/local-mode-cache/cache/cookbooks/effortless-start/recipes/default.rb
7: hab_package 'core/hab-launcher' do
8:   bldr_url 'https://nzakdot0098rzrw-front.elinux.westpac.co.nz/bldr/v1'
9: end
10:
Compiled Resource:
------------------
# Declared in /root/.chef/local-mode-cache/cache/cookbooks/effortless-start/recipes/default.rb:7:in `from_file'
hab_package("core/hab-launcher") do
package_name "core/hab-launcher"
action :install]
default_guard_interpreter :default
declared_type :hab_package
cookbook_name "effortless-start"
recipe_name "default"
bldr_url "
https://nzakdot0098rzrw-front.elinux.westpac.co.nz/bldr/v1"
end
System Info:
------------
chef_version=16.4.41
platform=redhat
platform_version=7.8
ruby=ruby 2.7.1p83 (2020-03-31 revision a0c7c23c9c) [x86_64-linux]
program_name=/bin/chef-client
executable=/opt/chef-workstation/bin/chef-client
[2020-09-15T13:38:13+12:00] INFO: Running queued delayed notifications before re-raising exception
Running handlers:
[2020-09-15T13:38:13+12:00] ERROR: Running exception handlers
Running handlers complete
[2020-09-15T13:38:13+12:00] ERROR: Exception handlers complete
Chef Infra Client failed. 0 resources updated in 02 seconds
[2020-09-15T13:38:13+12:00] FATAL: Stacktrace dumped to /root/.chef/local-mode-cache/cache/chef-stacktrace.out
[2020-09-15T13:38:13+12:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2020-09-15T13:38:13+12:00] DEBUG: Chef::Exceptions::Package: hab_package[core/hab-launcher] (effortless-start::default line 7) had an error: Chef::Exceptions::Package: No candidate version available for core/hab-launcher
https://www.youtube.com/results?search_query=utep+football+2020&sp=EgIYAg%253D%253D
Here are some of the issues that Fastly has identified/worked around with regards to Policyfiles:
chef install requires .rb file present
https://github.com/chef/chef-workstation/issues/1292
chef install ignores chef_server source
https://github.com/chef/chef-workstation/issues/1431
Profile support across "chef" commands is inconsistent
https://github.com/chef/chef-workstation/issues/1261
Need ability to "copy" POLICY_FILE from one POLICY_GROUP to another
https://github.com/chef/chef-workstation/issues/844
I'm also investigating issues related to multiple "chef install" commands running simultaneously producing inconsistent results, and validating whether "chef install" respects .chefignore files properly.
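For the simultaneous-install question, a hypothetical repro is simply racing two installs of the same Policyfile and comparing the resulting lockfiles and cache contents afterwards:
# repro sketch, not a fix: run two installs concurrently in the same repo
chef install Policyfile.rb &
chef install Policyfile.rb &
wait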
The architecture and design decisions that we make in Elasticsearch are based on certain assumptions, including the assumption that nodes are located on a local network. This is the use case that we optimize and extensively test for, because this is the environment that the vast majority of our users operate in. Network disruptions are much more common across WAN links between geographically distributed DCs, even if there is a dedicated link between them.
Elasticsearch is built to be resilient to networking disconnects, but that resiliency is intended to handle the exception, not the norm. Running a single Elasticsearch cluster that spans multiple DCs is not a scenario we test for and there are a number of additional reasons why it is not a recommended or supported practice we will go into below. (Note: On GCE and AWS, running a cluster across zones within a single region is supported)
Expect the Unexpected
* High Latency: Latency is a problem in distributed systems. High latency slows indexing, because an indexing request is processed on the primary shard first and then sent to all the replicas, and it also slows all cluster-wide communications (e.g. cluster state updates) in Elasticsearch.
* Limited or Unreliable Connectivity: If connectivity between nodes in a cluster is momentarily lost, it’s likely that remote shards will be out of date and any single update processed while in disconnected state will invalidate all content held on isolated replicas. This means that Elasticsearch requires the copying of these out of date shards to sync up replicas from their primaries to ensure consistency of data and search responses. Sending full shards for multiple indices may overwhelm a WAN based connection or cause considerable slowdown, leaving your cluster in a degraded state for an extended period of time.
* Data Availability: Assuming the correct setting of discovery.zen.minimum_master_nodes, in the event of a network disconnect between two or more DCs, only the DC with the elected master node will remain active. This can cause many issues for applications in the different DCs which may be attempting to index new data, as the nodes not part of the active cluster will reject any attempted writes. This also provides a challenge with cluster sizing. When the link between the two DCs is broken, the active half of the cluster will need to bear the full load of indexing and queries for all requests. When the link is restored, these nodes will also be pushing data and documents across the network while still handling the full indexing and request load. This necessitates larger or more powerful clusters to ensure enough CPU and IOPS to maintain acceptable performance during such events.
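For reference, a minimal sketch of the quorum setting mentioned above, assuming a hypothetical cluster with 3 master-eligible nodes (quorum = 3 / 2 + 1 = 2):
# append to elasticsearch.yml on each master-eligible node (pre-7.0 setting)
cat >> /etc/elasticsearch/elasticsearch.yml <<'EOF'
discovery.zen.minimum_master_nodes: 2
EOF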
Furthermore, distributing nodes across DCs is not a replacement for a DR site, or a replacement for proper backups (if this is what they are thinking). To counter the idea of simply measuring latency between DCs: it's not necessarily the latency itself that is the problem, but rather the behavior of failures across WAN links vs. local ones, and how writes get distributed across the cluster under degraded or failure conditions.
- In terms of DR: can 3 Postgres nodes of a single cluster be geographically dispersed across two DCs?
No; to my understanding, they CANNOT geographically distribute the cluster, as it is sensitive to latency on the write side. Clusters need to be in the same datacenter for on-prem; for cloud, they need to be in the same region, distributed across AZs.
- Could it be deployed via Kubernetes?
At the moment, no, we do not have a Kubernetes deployment topology. I don't see any plans in the near future to work on a Kube deployment topology (we're working on cloud native right now).
- YJ is already making use of Kubernetes in PROD, so preferably Kubernetes rather than other container tech.
We don’t have any plans to use containers to deploy Chef Automate Cluster for now.
- Automate HA Cluster Cookbook development.
- Could chef prep some even before the initial deployment?
We don’t use cookbooks to configure or deploy Chef Automate Cluster. It’s built entirely using Habitat, and provisioned with Terraform via ssh and rsync.
- Permitted operations:
- Basically all of what Chef might ask during PS is permitted, but yj is double-checking.
- Screen share
- Log dump share
- Node access (double-checking)
- Things that yj cannot share are Cookbooks and Databags.
This seems fine. All of our engagements with Chef Automate Cluster are screen share with someone from the customer driving, while we give instructions.
echo = "foxtrot"
golf = "hotel"
kilo = "lima", "mike", "november"]
alpha]
bravo = 10
charlie = "delta"
october = "pegasus"
india]
juliett = true
echo = "foxtrot"
golf = "hotel"
kilo = "november", "lima", "mike"]
alpha]
bravo = 10
charlie = "delta"
india]
juliett = true
oscar = 14
Mathieu Sauve-Frankel
5:28
cache_bug_log.txt
Started by GitHub push by kisoku
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on prodeng-rc-worker01 (rc_cookbook_builder rc_repo_builder) in workspace /home/jenkins/workspace/fastly-def/cfgmgmt-chef-repo
using credential github-ssh-key
Cloning the remote Git repository
Cloning repository g...@github.com:fastly-def/cfgmgmt-chef-repo.git
> git init /home/jenkins/workspace/fastly-def/cfgmgmt-chef-repo # timeout=10
Fetching upstream changes from g...@github.com:fastly-def/cfgmgmt-chef-repo.git
> git --version # timeout=10
using GIT_SSH to set credentials SSH private key used to interact with GitHub
> git fetch --tags --progress g...@github.com:fastly-def/cfgmgmt-chef-repo.git +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url g...@github.com:fastly-def/cfgmgmt-chef-repo.git # timeout=10
> git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
> git config remote.origin.url g...@github.com:fastly-def/cfgmgmt-chef-repo.git # timeout=10
Fetching upstream changes from g...@github.com:fastly-def/cfgmgmt-chef-repo.git
using GIT_SSH to set credentials SSH private key used to interact with GitHub
> git fetch --tags --progress g...@github.com:fastly-def/cfgmgmt-chef-repo.git +refs/heads/*:refs/remotes/origin/* # timeout=10
Seen branch in repository origin/kisoku/initial_policies
Seen branch in repository origin/master
Seen 2 remote branches
> git show-ref --tags -d # timeout=10
Checking out Revision df077a65d79f5e35875bf7eda497a2d4bac8d246 (origin/kisoku/initial_policies)
> git config core.sparsecheckout # timeout=10
> git checkout -f df077a65d79f5e35875bf7eda497a2d4bac8d246 # timeout=10
> git branch -a -v --no-abbrev # timeout=10
> git checkout -b kisoku/initial_policies df077a65d79f5e35875bf7eda497a2d4bac8d246 # timeout=10
Commit message: "add initial base policies, and chef_cluster_bastion"
First time build. Skipping changelog.
Cleaning workspace
> git rev-parse --verify HEAD # timeout=10
Resetting working tree
> git reset --hard # timeout=10
> git clean -fdx # timeout=10
[Set GitHub commit status (universal)] PENDING on repos GHRepository@2f8b43fc[nodeId=MDEwOlJlcG9zaXRvcnkyODgzMTQ0MzQ=,description=<null>,homepage=,name=cfgmgmt-chef-repo,fork=false,archived=false,size=8,milestones={},language=Ruby,commits={},source=<null>,parent=<null>,responseHeaderFields={null=HTTP/1.1 200 OK], Access-Control-Allow-Origin=*], Access-Control-Expose-Headers=ETag, Link, Location, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval, X-GitHub-Media-Type, Deprecation, Sunset], Cache-Control=private, max-age=60, s-maxage=60], Content-Encoding=gzip], Content-Security-Policy=default-src 'none'], Content-Type=application/json; charset=utf-8], Date=Tue, 18 Aug 2020 07:23:51 GMT], ETag=W/"1d5affdb6e8e2d43c11178a8850c849f"], Last-Modified=Tue, 18 Aug 2020 00:19:06 GMT], OkHttp-Received-Millis=1597735431201], OkHttp-Response-Source=CONDITIONAL_CACHE 200], OkHttp-Selected-Protocol=http/1.1], OkHttp-Sent-Millis=1597735430928], Referrer-Policy=origin-when-cross-origin, strict-origin-when-cross-origin], Server=GitHub.com], Status=200 OK], Strict-Transport-Security=max-age=31536000; includeSubdomains; preload], Transfer-Encoding=chunked], Vary=Accept, Authorization, Cookie, X-GitHub-OTP, Accept-Encoding, Accept, X-Requested-With, Accept-Encoding], X-Accepted-OAuth-Scopes=repo], X-Content-Type-Options=nosniff], X-Frame-Options=deny], X-GitHub-Media-Type=github.v3; format=json], X-GitHub-Request-Id=8E96:48AD:47F6E2:5A8E25:5F3B8206], X-OAuth-Scopes=admin:repo_hook, repo], X-RateLimit-Limit=5000], X-RateLimit-Remaining=4999], X-RateLimit-Reset=1597739031], X-XSS-Protection=1; mode=block]},url=
https://api.github.com/repos/fastly-def/cfgmgmt-chef-repo,id=288314434]] (sha:df077a6) with context:jenkins
Setting commit status on GitHub for https://github.com/fastly-def/cfgmgmt-chef-repo/commit/df077a65d79f5e35875bf7eda497a2d4bac8d246
[cfgmgmt-chef-repo] $ /bin/bash /tmp/jenkins6065397057860609663.sh
+ set -e
+ '[' origin/kisoku/initial_policies == origin/master ']'
+ rc_test_repo
INFO: Running Inspec
inspec/users_spec.rb:20: warning: constant ::Fixnum is deprecated
Profile: tests from inspec (tests from inspec)
Version: (not specified)
Target: local://
No tests executed.
Test Summary: 0 successful, 0 failures, 0 skipped
INFO: Skipping RSpec
INFO: Skipping /environments as it is not present
INFO: Skipping /roles as it is not present
+ rc_test_policies
INFO: Installing policy chef_cluster_bastion to policy_group production from lockfile: /home/jenkins/workspace/fastly-def/cfgmgmt-chef-repo/policies/production/chef_cluster_bastion.lock.json
Installing cookbooks from lock
Installing apt 7.2.0
Installing audit 9.4.0
Installing aws 8.2.0
Installing chef-client 11.5.0
Installing chef-sugar 5.1.8
Installing chef-vault 4.0.0
Installing chef_client_updater 3.8.4
Installing cron 6.2.2
Installing fst_app_chef_cluster 0.1.13
Error: Failed to install cookbooks from lockfile
Reason: (Net::HTTPServerException) HTTP 404 Object Not Found: Cannot find a cookbook named fst_app_chef_cluster with version 0.1.13
INFO: Installing policy chef_cluster to policy_group production from lockfile: /home/jenkins/workspace/fastly-def/cfgmgmt-chef-repo/policies/production/chef_cluster.lock.json
Installing cookbooks from lock
Using apt 7.2.0
Using aws 8.2.0
Using chef-sugar 5.1.8
Using chef-vault 4.0.0
Installing fst_app_chef_cluster 0.1.13
Error: Failed to install cookbooks from lockfile
Reason: (Net::HTTPServerException) HTTP 404 Object Not Found: Cannot find a cookbook named fst_app_chef_cluster with version 0.1.13
INFO: Installing policy base to policy_group production from lockfile: /home/jenkins/workspace/fastly-def/cfgmgmt-chef-repo/policies/production/base.lock.json
Installing cookbooks from lock
Using apt 7.2.0
Using audit 9.4.0
Using chef-client 11.5.0
Using chef-sugar 5.1.8
Using chef-vault 4.0.0
Using chef_client_updater 3.8.4
Using cron 6.2.2
Installing fst_app_monitoring 1.0.89
Installing fst_apt 1.0.24
Installing fst_base 1.0.149
Installing fst_lib_apt_exporter 0.1.4
Installing fst_lib_blackbox_exporter 0.1.3
Installing fst_lib_bolt 0.1.4
Installing fst_lib_capsule8 0.1.22
Installing fst_lib_datadog_agent 1.0.13
Installing fst_lib_ebpf_exporter 0.1.3
Installing fst_lib_ethtool_exporter 0.1.3
Installing fst_lib_ganglia_monitor 1.0.14
Installing fst_lib_hashicorp_ingredient 0.1.3
Installing fst_lib_i_love_systemd 1.0.37
Installing fst_lib_ipfixexport 0.1.8
Installing fst_lib_kitchen_sink_exporter 0.1.1
Installing fst_lib_node_exporter 0.1.34
Installing fst_lib_osquery 0.1.10
Installing fst_lib_ossec 1.0.21
Installing fst_lib_process_exporter 0.1.10
Installing fst_lib_promsd 0.1.20
Installing fst_lib_rsyslog 1.0.29
Installing fst_lib_ss_exporter 0.1.2
Installing fst_lib_static_route 0.1.1
Installing fst_lib_sysinventory 0.1.4
Installing fst_lib_telemetryd 0.1.13
Installing fst_lib_vault 0.1.9
Installing fst_lib_vaultly 1.0.13
Installing fst_ohai_configly 1.0.16
Installing fst_ohai_ipam 1.0.29
Installing git 10.0.0
Installing hostsfile 3.0.1
Installing iptables 7.0.0
Installing logrotate 2.2.2
Installing ntp 3.7.0
Installing ohai 5.3.0
Installing openssh 2.8.1
Installing perl 7.0.1
Installing poise 2.8.2
Installing poise-archive 1.5.0
Installing poise-languages 2.1.2
Installing poise-python 1.7.0
Installing poise-tls-remote-file 1.0.1
Installing ssh_known_hosts 7.0.0
Installing sudo 5.4.5
Installing ubuntu 3.0.3
ERROR: Failures encountered while validating policy_groups
ERROR: Failures for lockfile /home/jenkins/workspace/fastly-def/cfgmgmt-chef-repo/policies/production/chef_cluster_bastion.lock.json:
ERROR: Expected process to exit with [0], but received '1'
---- Begin output of ["chef", "install", "chef_cluster_bastion.lock.json"] ----
STDOUT: Installing cookbooks from lock
Installing apt 7.2.0
Installing audit 9.4.0
Installing aws 8.2.0
Installing chef-client 11.5.0
Installing chef-sugar 5.1.8
Installing chef-vault 4.0.0
Installing chef_client_updater 3.8.4
Installing cron 6.2.2
Installing fst_app_chef_cluster 0.1.13
STDERR: Error: Failed to install cookbooks from lockfile
Reason: (Net::HTTPServerException) HTTP 404 Object Not Found: Cannot find a cookbook named fst_app_chef_cluster with version 0.1.13
---- End output of ["chef", "install", "chef_cluster_bastion.lock.json"] ----
Ran ["chef", "install", "chef_cluster_bastion.lock.json"] returned 1
ERROR: Failures for lockfile /home/jenkins/workspace/fastly-def/cfgmgmt-chef-repo/policies/production/chef_cluster.lock.json:
ERROR: Expected process to exit with [0], but received '1'
---- Begin output of ["chef", "install", "chef_cluster.lock.json"] ----
STDOUT: Installing cookbooks from lock
Using apt 7.2.0
Using aws 8.2.0
Using chef-sugar 5.1.8
Using chef-vault 4.0.0
Installing fst_app_chef_cluster 0.1.13
STDERR: Error: Failed to install cookbooks from lockfile
Reason: (Net::HTTPServerException) HTTP 404 Object Not Found: Cannot find a cookbook named fst_app_chef_cluster with version 0.1.13
---- End output of ["chef", "install", "chef_cluster.lock.json"] ----
Ran ["chef", "install", "chef_cluster.lock.json"] returned 1
Build step 'Execute shell' marked build as failure
[Set GitHub commit status (universal)] ERROR on repos GHRepository@338a8a22[nodeId=MDEwOlJlcG9zaXRvcnkyODgzMTQ0MzQ=,description=<null>,homepage=,name=cfgmgmt-chef-repo,fork=false,archived=false,size=8,milestones={},language=Ruby,commits={},source=<null>,parent=<null>,responseHeaderFields={null=HTTP/1.1 200 OK], Access-Control-Allow-Origin=*], Access-Control-Expose-Headers=ETag, Link, Location, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval, X-GitHub-Media-Type, Deprecation, Sunset], Cache-Control=private, max-age=60, s-maxage=60], Content-Encoding=gzip], Content-Security-Policy=default-src 'none'], Content-Type=application/json; charset=utf-8], Date=Tue, 18 Aug 2020 07:23:51 GMT], ETag=W/"1d5affdb6e8e2d43c11178a8850c849f"], Last-Modified=Tue, 18 Aug 2020 00:19:06 GMT], OkHttp-Received-Millis=1597735431201], OkHttp-Response-Source=CACHE 200], OkHttp-Selected-Protocol=http/1.1], OkHttp-Sent-Millis=1597735430928], Referrer-Policy=origin-when-cross-origin, strict-origin-when-cross-origin], Server=GitHub.com], Status=200 OK], Strict-Transport-Security=max-age=31536000; includeSubdomains; preload], Transfer-Encoding=chunked], Vary=Accept, Authorization, Cookie, X-GitHub-OTP, Accept-Encoding, Accept, X-Requested-With, Accept-Encoding], X-Accepted-OAuth-Scopes=repo], X-Content-Type-Options=nosniff], X-Frame-Options=deny], X-GitHub-Media-Type=github.v3; format=json], X-GitHub-Request-Id=8E96:48AD:47F6E2:5A8E25:5F3B8206], X-OAuth-Scopes=admin:repo_hook, repo], X-RateLimit-Limit=5000], X-RateLimit-Remaining=4999], X-RateLimit-Reset=1597739031], X-XSS-Protection=1; mode=block]},url=
https://api.github.com/repos/fastly-def/cfgmgmt-chef-repo,id=288314434]] (sha:df077a6) with context:jenkins
Setting commit status on GitHub for https://github.com/fastly-def/cfgmgmt-chef-repo/commit/df077a65d79f5e35875bf7eda497a2d4bac8d246
[Slack Notifications] will send OnSingleFailureNotification because build matches and user preferences allow it
Finished: FAILURE
5:28
see those 404s?
5:29
it’s just trying to reinstall a lockfile
5:29
that I generated on my laptop just a few minutes earlier
5:30
chef_cluster_bastion.lock.json
{
  "revision_id": "02f764c3f0657c240e6bcb2c55bac7a67f649922e37e598df5f961ef7f04ae4f",
  "name": "chef_cluster_bastion",
  "run_list": [
    "recipe[fst_base::default]",
    "recipe[fst_app_monitoring::default]",
    "recipe[fst_app_chef_cluster::default]",
    "recipe[fst_app_chef_cluster::bastion]"
  ],
  "named_run_lists": {
    "bootstrap": [
      "recipe[fst_base::default]"
    ]
  },
  "included_policy_locks": [
    {
      "name": "base",
      "revision_id": "46cebf0fc59d9d14b6ada4fb9afa10aa53d14108c45abe1051e3dfbe1129b6a2",
      "source_options": {
        "path": "base.lock.json"
      }
    },
    {
      "name": "chef_cluster",
      "revision_id": "4a43f8a3be8fed0bbe4758871c0def4d7e205c6494b974cfa1a5420f7cf44d2f",
      "source_options": {
        "path": "chef_cluster.lock.json"
      }
    }
  ],
  "cookbook_locks": {
    "apt": {
      "version": "7.2.0",
      "identifier": "842599d4fec784a59a6ac1a9c79fb48f000b1c0d",
      "dotted_decimal_identifier": "37196039559497604.46613154463598495.198526274051085",
      "cache_key": "apt-7.2.0",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "7.2.0"
      }
    },
    "audit": {
      "version": "9.4.0",
      "identifier": "2a0f8cc6308f865bada61320184a71b74e769cba",
      "dotted_decimal_identifier": "11839046316756870.25805151677716554.125032109350074",
      "cache_key": "audit-9.4.0",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "9.4.0"
      }
    },
    "aws": {
      "version": "8.2.0",
      "identifier": "d2c43af9209919c47e3a051a7f9067377031baa6",
      "dotted_decimal_identifier": "59325502676048153.55307883094114192.113487803169446",
      "cache_key": "aws-8.2.0",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "8.2.0"
      }
    },
    "chef-client": {
      "version": "11.5.0",
      "identifier": "b2437a13eef749774828d032c2f1ff34e57582cd",
      "dotted_decimal_identifier": "50176737453995849.33574862357447409.280602653065933",
      "cache_key": "chef-client-11.5.0",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "11.5.0"
      }
    },
    "chef-sugar": {
      "version": "5.1.8",
      "identifier": "f4d61f4b92dbfbc8f4699c736f6804d49a74d5d2",
      "dotted_decimal_identifier": "68915324217646075.56563729775685480.5311170926034",
      "cache_key": "chef-sugar-5.1.8",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "5.1.8"
      }
    },
    "chef-vault": {
      "version": "4.0.0",
      "identifier": "7231ce39b04d459b8866331d827d9ca6bf8b5087",
      "dotted_decimal_identifier": "32142909145894213.43778593915765373.172239992082567",
      "cache_key": "chef-vault-4.0.0",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "4.0.0"
      }
    },
    "chef_client_updater": {
      "version": "3.8.4",
      "identifier": "50f2a82ee8c9647a520baab8707af6d91e2d967b",
      "dotted_decimal_identifier": "22784802292287844.34430157221032058.271412374640251",
      "cache_key": "chef_client_updater-3.8.4",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "3.8.4"
      }
    },
    "cron": {
      "version": "6.2.2",
      "identifier": "e9a77585c40648ef66b6d0eca3ba68a6a7d5049e",
      "dotted_decimal_identifier": "65767792770811464.67385454809097146.115064989615262",
      "cache_key": "cron-6.2.2",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "6.2.2"
      }
    },
    "fst_app_chef_cluster": {
      "version": "0.1.13",
      "identifier": "525326c2455d89e582e72d5b8c1fc0b9970f3e75",
      "dotted_decimal_identifier": "23172374023462281.64601699076770847.211903335841397",
      "cache_key": "fst_app_chef_cluster-0.1.13",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "0.1.13"
      }
    },
    "fst_app_monitoring": {
      "version": "1.0.89",
      "identifier": "a617eb01403221545dbbc5fd3a72e5b0d6e4461a",
      "dotted_decimal_identifier": "46751144239706657.23746959105669746.252547682289178",
      "cache_key": "fst_app_monitoring-1.0.89",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "1.0.89"
      }
    },
    "fst_app_prometheus": {
      "version": "0.1.39",
      "identifier": "c98d81d72e40362cba2878078832c0595d559538",
      "dotted_decimal_identifier": "56732059119271990.12589581950486578.211490050512184",
      "cache_key": "fst_app_prometheus-0.1.39",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "0.1.39"
      }
    },
    "fst_apt": {
      "version": "1.0.24",
      "identifier": "ad0ec88dda548b4fcf0944cc0daef96195e56092",
      "dotted_decimal_identifier": "48711425507087499.22464161876020654.274197521981586",
      "cache_key": "fst_apt-1.0.24",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "1.0.24"
      }
    },
    "fst_base": {
      "version": "1.0.149",
      "identifier": "22599b8aaa003a4056f324e5253584e04c3e86b5",
      "dotted_decimal_identifier": "9668673789362234.18110000805520693.146098886706869",
      "cache_key": "fst_base-1.0.149",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "1.0.149"
      }
    },
    "fst_lib_apt_exporter": {
      "version": "0.1.4",
      "identifier": "f762de64b18b8d39a39367b5fa880b902b05530f",
      "dotted_decimal_identifier": "69633026559150989.16223927168006792.12713824965391",
      "cache_key": "fst_lib_apt_exporter-0.1.4",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "0.1.4"
      }
    },
    "fst_lib_automate_exporter": {
      "version": "0.1.1",
      "identifier": "1d8e49c4b3c63c31701b2c92f9d4f76300235319",
      "dotted_decimal_identifier": "8319221808481852.13915535873079764.272004576138009",
      "cache_key": "fst_lib_automate_exporter-0.1.1",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "0.1.1"
      }
    },
    "fst_lib_awslogs": {
      "version": "0.1.5",
      "identifier": "a7dc05606bd2e79bb6b02128117ad15ae0f2d46c",
      "dotted_decimal_identifier": "47248236761305831.43829488976925050.230188251272300",
      "cache_key": "fst_lib_awslogs-0.1.5",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "0.1.5"
      }
    },
    "fst_lib_blackbox_exporter": {
      "version": "0.1.3",
      "identifier": "5a747339eb7b74a9d07906c4d48a10d65776ea5c",
      "dotted_decimal_identifier": "25460786145753972.47798489287283850.18512776456796",
      "cache_key": "fst_lib_blackbox_exporter-0.1.3",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "0.1.3"
      }
    },
    "fst_lib_bolt": {
      "version": "0.1.4",
      "identifier": "f3c722bd2df7acfa25e16bd4590d864e64e3b9b5",
      "dotted_decimal_identifier": "68617371357411244.70410394284611853.147671258216885",
      "cache_key": "fst_lib_bolt-0.1.4",
      "origin": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
      "source_options": {
        "chef_server": "https://api.chef.secretcdn.net/organizations/mosdef_cookbooks",
        "version": "0.1.4"
      }
    },
    "fst_lib_capsule8": {
      "version": "0.1.22",
      "identif...
This snippet was truncated for display; see it in full
Tara Ray
28 Wanganella Street
Balgowlah, NSW 2093
Australia
nodes -> Chef Server
chef-client on the Chef Server?
- talk to another Chef Server
- or it can use chef-client -z
- still need to schedule a recurring run (see the cron sketch below)
currently:
- policyfile tarball of content extracted in a directory + attributes.json
- run it as "chef-client -z -j attributes.json -c server.rb"
- we need to add "managed_chef_server::cron" to the run list
- and provide path to tarball and cron schedule
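A minimal cron sketch for that recurring run, assuming a hypothetical /opt/chef/policy directory where the tarball is extracted (interval and paths are made up):
# /etc/cron.d/chef-local-mode
*/30 * * * * root cd /opt/chef/policy && chef-client -z -j attributes.json -c server.rb >> /var/log/chef-local-run.log 2>&1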
GNOC3(Chef Infra Server)
->Organizations(3)
Nucleus-G3
GICS-G3
SPKI-G3
GNOC5(Chef Infra Server)
->Organizations(4)
Nucleus-G5
GICS-G5
SPKI-G5
Agencies-G5
Nucleus-SIT(Chef Infra Server)
->Organizations()
Nucleus-SIT
Nucleus-Build(Chef Infra Server)
->Organizations()
Nucleus-Build
Takeaways:
Nexus -> CI/CD output maps to a directory for that organization -> artifact type (i.e. "nucleus-sit-policies")
Chef Server ->
nexus_sync recipe reads the Nexus directory (url="https://nexus.sg:8443/nucleus-sit-policies") into a local dir (/opt/chef/nexus_sync/nucelus-sit-policies)
need to write (see the shell sketch below): also pulls .lock.json out of each .tgz
- make tmp dir
- extract archive into tmp dir (https://docs.chef.io/resources/archive_file/)
- move .lock.json into directory
- delete tmp dir
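A minimal shell sketch of those steps, using the example artifact paths below (a recipe would do the same via the archive_file resource):
# extract the .tgz in a scratch dir and keep only the .lock.json alongside it
tmpdir=$(mktemp -d)
tar -xzf /opt/chef/nexus_sync/nucelus-sit-policies/sql-server-23y0r28ewhfv02hwf.tgz -C "$tmpdir"
mv "$tmpdir"/*.lock.json /opt/chef/nexus_sync/nucelus-sit-policies/
rm -rf "$tmpdir"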
/opt/chef/nexus_sync/nucelus-sit-policies/sql-server-23y0r28ewhfv02hwf.tgz
/opt/chef/nexus_sync/nucelus-sit-policies/sql-server-23y0r28ewhfv02hwf.lock.json
wrapper cookbook
metadata.rb:
  depends 'managed_chef_server'
policyfile_loader:
  org = 'nucleus-sit'
  dir = '/opt/chef/nexus_sync/nucelus-sit-policies'

['Nucleus-G3', 'GICS-G3', 'SPKI-G3'].each do |org|
  managed_organization org
  policyfile_loader node['mcs']['policyfile']['dir'] + '/' + org do
    organization org
  end
end
managed_organization 'gnoc3-sit create managed Chef server organization and user' do
  organization 'gnoc3-sit'
  full_name 'gnoc3-sit'
  email 'gnoc...@ncs.gov.sg'
  password 'gnoc...@ncs.gov.sg'
end

policyfile_loader '/opt/chef/gnoc3-sit' do
  organization 'gnoc3-sit'
end

managed_organization 'gnoc3-uat create managed Chef server organization and user' do
  organization 'gnoc3-uat'
  full_name 'gnoc3-uat'
  email 'gnoc...@ncs.gov.sg'
  password 'gnoc...@ncs.gov.sg'
end

policyfile_loader '/opt/chef/gnoc3-uat' do
  organization 'gnoc3-uat'
end