I am new to DeltaV and I was shocked by the limitations of the WatchIt utility. During testing/debugging it is very useful if the engineer has a simple, user-friendly list that displays all the parameters he cares about, with the ability to easily add and remove parameters.
I could develop a standalone tool in VB.NET that interfaces with the OPC DA Automation ActiveX interface, but there is no way my client would agree to use a non-standard, non-Emerson tool on their systems.
Why are you limited to only 6 WatchIts? I have had many more than this, positioned so that you can just watch the values change during testing, and I group them logically, e.g. the SP and MODE.Target of a valve together, as in the example I've shown below.
There's no substitute for understanding what you're doing, and that in turn is difficult without seeing what is happening. Debugging Keycloak OIDC problems without understanding what is happening under the hood is no exception to this rule. The purpose of this article is twofold:
For the purposes of this blog post I've been using the OpenID Connect Playground application from the book Keycloak - Identity and Access Management for Modern Applications from Packt Publishing. I can recommend that book to anyone who needs to understand Keycloak and in particular the protocols it supports (OAuth2, OIDC and SAML 2.0).
If traffic between Keycloak and the client (e.g. a web application) is not encrypted, debugging Keycloak OIDC token exchanges is easy to do with tcpdump. Here's a sample tcpdump command-line that ran on the computer running the web browser:
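A minimal sketch of such a capture, assuming the browser reaches Keycloak over plain HTTP on port 8080 (adjust the interface and port for your setup):

```shell
# -i any: capture on all interfaces; -A: print packet payloads as ASCII;
# -s 0: capture full packets rather than truncating them
sudo tcpdump -i any -A -s 0 'tcp port 8080'
```

The -A flag is what makes the HTTP requests and the tokens inside them readable directly in the terminal output.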
Below you'll see the exchange between the application and Keycloak using the Authorization Code flow, which is essentially the same as the Authorization Code grant type in OAuth2. Here the Keycloak server lives at :8080. I recorded the flow below from the computer running the browser. To get the full token exchange you also have to record from the Keycloak side.
All the tokens are JSON Web Tokens (JWT) and consist of three dot-separated parts. The first part is a base64url-encoded header and the second part is the base64url-encoded payload. The third part is the signature. You can simply copy the strings from tcpdump, split at "." and base64-decode the segments to get the JSON-formatted data, as shown below. In JWTs all times are expressed as Unix epoch timestamps. RFC 7519 documents the fields present in these tokens, so I won't go through them here.
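The decoding step can be sketched in a few lines of Python. The token here is a hand-crafted example, not a real Keycloak token:

```python
import base64
import json

def decode_jwt_part(part: str) -> dict:
    """Base64url-decode one dot-separated JWT segment into JSON."""
    # JWT segments use unpadded base64url; restore the padding before decoding
    padded = part + "=" * (-len(part) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# A minimal hand-crafted token: header and payload only, fake signature
header = base64.urlsafe_b64encode(
    json.dumps({"alg": "RS256", "typ": "JWT"}).encode()).rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(
    json.dumps({"exp": 1700000000, "iss": "http://localhost:8080"}).encode()).rstrip(b"=").decode()
token = f"{header}.{payload}.signature"

parts = token.split(".")
print(decode_jwt_part(parts[0]))  # header
print(decode_jwt_part(parts[1]))  # payload; times are Unix epoch seconds
```

The same split-and-decode approach works on any token string copied out of the tcpdump output.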
Much of the data in the ID token is derived from the Keycloak user you authenticated as. For example, if your user has an email address, first name and last name, you will see additional fields like these:
If you're only interested in what the browser sees, you can use the SAML, WS-Federation and OAuth 2.0 tracer extension for Google Chrome. I suggest turning verbosity up to the maximum in the options. Once you start recording, debugging the Keycloak OIDC token exchange is just a matter of checking the messages in the traces. That is way easier than parsing the raw tcpdump data. Also, if traffic between Keycloak and the browser is end-to-end encrypted with HTTPS, then tcpdump might not be an option at all.
Firefox DevTools, which are built into Firefox, allow you to view HTTP requests and payloads. Press Ctrl-Shift-E to go straight to the Network section, where you can analyze what the browser sees during token exchanges.
Keycloak provides tools for evaluating the tokens granted for clients (e.g. web applications). Go to the Keycloak client of your application, then select "Client Scopes", select a "User" to impersonate and click on "Evaluate". Several interesting tabs now appear:
As you can see, you can debug token contents directly from Keycloak, without having to trigger a real OpenID Connect token exchange and debug the tokens that way. Similarly, you can check whether there are any inconsistencies between the generated ID token and User Info, should your application prefer getting its user information from one over the other.
OpenID Connect Playground is a very simple Node.js application with the sole purpose of making the OIDC token exchange visible. In OpenID Connect terms it is a Relying Party (RP) that uses the Authorization Code flow. It is a quite useful tool for learning what happens under the hood in OpenID Connect, but a debug tool it is not. However, it is a good tool for understanding how token exchange should work, so that you can more easily spot anomalies in real life.
The OpenID Connect Playground is useless by itself, because it relies on Keycloak. For testing purposes we tend to use the Vagrant + Virtualbox environment in puppet-module-keycloak. However, running Keycloak locally or inside a container will work equally well.
With the client in place you should be able to use the application. In the Discovery section set the Issuer to your Keycloak realm's URL, for example :8080/auth/realms/master. Then click "Load OpenID Provider Configuration" and you should see a bunch of JSON data that Keycloak published about itself to the application.
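Under the hood, "Load OpenID Provider Configuration" fetches the OIDC discovery document, which per the OpenID Connect Discovery spec lives at a well-known path relative to the issuer. A small sketch (the issuer URL is a hypothetical local Keycloak realm):

```python
def discovery_url(issuer: str) -> str:
    # The discovery document is always at <issuer>/.well-known/openid-configuration
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

# Hypothetical local Keycloak realm URL
issuer = "http://localhost:8080/auth/realms/master"
print(discovery_url(issuer))
# Fetching this URL (e.g. with urllib.request.urlopen) returns the JSON
# that Keycloak publishes about itself: endpoints, supported flows, etc.
```

You can also fetch that URL directly in a browser to inspect the same JSON the application receives.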
RabbitMQ versions prior to 3.9.0 would always log to a file unless explicitly configured not to do so. With later versions, the same behavior can be achieved by explicitly listing a file output next to other desired log outputs, such as the standard stream one.
There are two ways to configure log file location. One is the configuration file. This option is recommended. The other is the RABBITMQ_LOGS environment variable. It can be useful in development environments.
The environment variable takes precedence over the configuration file. When in doubt, consider overriding log file location via the config file. As a consequence of the environment variable precedence, if the environment variable is set, the configuration key log.file will not have any effect.
Logging to a file is one of the most common options for RabbitMQ installations. In modern releases, RabbitMQ nodes only log to a file if explicitly configured to do so using the configuration keys listed below:
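A minimal rabbitmq.conf sketch of file logging (the path and level shown are illustrative):

```ini
# Write log messages to a file at the given path
log.file = rabbit.log
log.file.level = info
```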
RabbitMQ nodes always append to the log files, so a complete log history is preserved. Log file rotation is not performed by default. Debian and RPM packages will set up log rotation via logrotate after package installation.
Logging to standard streams (console) is another popular option for RabbitMQ installations, in particular when RabbitMQ nodes are deployed in containers. RabbitMQ nodes only log to standard streams if explicitly configured to do so.
Syslog metadata identity and facility values can also be configured. By default, identity will be set to the name part of the node name (for example, rabbitmq in rabbitmq@hostname) and facility will be set to daemon.
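A rabbitmq.conf sketch overriding both defaults (the identity and facility values are illustrative):

```ini
# Send log messages to syslog with a custom identity and facility
log.syslog = true
log.syslog.identity = my_rabbitmq
log.syslog.facility = local0
```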
By default, categories do not filter by level. If an output is configured to log debug messages, debug messages will be printed for all categories. Configure a log level for a category to override this.
Logging verbosity can be controlled on multiple layers by setting log levels for categories and outputs. More verbose log levels include more log messages, with debug being the most verbose and none the least.
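As a rabbitmq.conf sketch (connection is a real RabbitMQ message category; the levels chosen here are illustrative):

```ini
# Default level applied to all categories
log.default.level = info
# Make the connection category more verbose than the rest
log.connection.level = debug
```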
The rabbitmq-diagnostics log_tail_stream command can only be used against a running RabbitMQ node and will fail if the node is not running or the RabbitMQ application on it was stopped using rabbitmqctl stop_app.
When debug logging is enabled, the node will log a lot of information that can be useful for troubleshooting. This log severity is meant to be used when troubleshooting, say, the peer discovery activity.
A client connection can be closed cleanly or abnormally. In the former case the client closes the AMQP 0-9-1 (or 1.0, or STOMP, or MQTT) connection gracefully using a dedicated library function (method). In the latter case the client closes the TCP connection or the TCP connection fails. RabbitMQ will log both cases.
Abruptly closed connections can be harmless. For example, a short-lived program can naturally stop without getting a chance to close its connection. They can also hint at a genuine issue such as a failed application process or a proxy that closes TCP connections it considers idle.
RabbitMQ nodes have an internal event mechanism. Some of its events can be of interest for monitoring, audit and troubleshooting purposes. They can be consumed as JSON objects using a rabbitmq-diagnostics command:
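A sketch of such a command (consume_event_stream is the relevant rabbitmq-diagnostics subcommand in modern releases; the duration value is illustrative):

```shell
# Stream internal node events as JSON objects for 60 seconds
rabbitmq-diagnostics consume_event_stream --duration 60
```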
Applications that would like to consume log entries need to declare a queue and bind it to the exchange, using a routing key to filter for a specific log level, or # to consume all log entries allowed by the configured log level.
When a client connects to an SSH server, the server starts the SSH protocol by sending a server version string in plain text to the client. With the OpenSSH ssh utility, the relevant debug lines look like this:
After the "local version" line, your client is waiting for the server to send its version string to the client. If the connection hangs here, it's because the client hasn't received the version string from the server.
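For reference, the server's identification string follows the format defined in RFC 4253: SSH-protoversion-softwareversion, optionally followed by a space and a comment. A small sketch that parses a banner the client might receive (the example banner is illustrative):

```python
def parse_ssh_banner(line: str):
    """Split an RFC 4253 identification string into its parts.

    Returns (protoversion, softwareversion, comment-or-None),
    or None if the line is not an SSH identification string.
    """
    if not line.startswith("SSH-"):
        return None
    # Anything after the first space is an optional comment
    ident, _, comment = line.strip().partition(" ")
    # ident has the form "SSH-protoversion-softwareversion"
    _, proto, software = ident.split("-", 2)
    return proto, software, comment or None

# Example banner of the kind a typical OpenSSH server sends
print(parse_ssh_banner("SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.1"))
```

If the connection hangs before any such string arrives, the problem is on the server side of this exchange, not the client's.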
In your case, you're connecting to port 22 so it's safe to assume you're connecting to an SSH server process. It seems likely you're suffering from #2 (the server is malfunctioning), but it's not possible to say exactly what is wrong beyond that. You would need to get into the server and figure out what was happening at the time which prevented it from processing SSH connections.
I am wondering what the point of generating a Packed Library within the TestStand Deployment Tool could be, since the sequences using these VIs will not be able to find the dependent VIs included in the built Packed Library on the deployment target.
If your VIs call non-LabVIEW dependencies, then you need to include those in the same folder as the packed project library. For instance, we were using some XML parsing code modules that were built on top of some DLLs. Until we put them in the same folder as the packed project library, it wouldn't work. I don't know why the PPL couldn't pull them into itself, but it doesn't with the TS deployment utility.
A custom step type should be a standalone entity. In other words, a custom step type's substeps and dependencies should all be packaged up into a nice neat LLB or DLL before you develop with it. It would all be packaged up and put in the TestStand public folders, in the Components>>StepTypes folder. Then you just deploy the public folder contents.