Check Vcpu In Linux

Rodney Liuzzo
Aug 5, 2024, 1:46:11 AM
to tardsellparcui
KVM supports an internal API enabling threads to request a VCPU thread to perform some activity. For example, a thread may request a VCPU to flush its TLB with a VCPU request. The API consists of the following functions:
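
The function list itself did not survive the copy. The declarations below are reproduced from the kernel's Documentation/virt/kvm/vcpu-requests.rst as best I recall them; check them against your kernel tree:

    /* Check if any requests are pending for VCPU @vcpu. */
    bool kvm_request_pending(struct kvm_vcpu *vcpu);

    /* Check if VCPU @vcpu has request @req pending. */
    bool kvm_test_request(int req, struct kvm_vcpu *vcpu);

    /* Clear request @req for VCPU @vcpu. */
    void kvm_clear_request(int req, struct kvm_vcpu *vcpu);

    /*
     * Check if VCPU @vcpu has request @req pending. If the request is
     * pending it will be cleared, and a memory barrier, which pairs with
     * another in kvm_make_request(), will be issued.
     */
    bool kvm_check_request(int req, struct kvm_vcpu *vcpu);

    /* Make request @req of VCPU @vcpu. */
    void kvm_make_request(int req, struct kvm_vcpu *vcpu);

    /* Make request @req of all VCPUs of the VM with struct kvm @kvm. */
    bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req);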

Typically a requester wants the VCPU to perform the activity as soon as possible after making the request. This means most requests (kvm_make_request() calls) are followed by a call to kvm_vcpu_kick(), and kvm_make_all_cpus_request() has the kicking of all VCPUs built into it.
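
A minimal sketch of that pattern (KVM_REQ_TLB_FLUSH is a real request; the wrapper function is hypothetical):

    /* Hypothetical helper: ask @vcpu to flush its TLB, then kick it so
     * the request is serviced promptly. */
    static void example_request_tlb_flush(struct kvm_vcpu *vcpu)
    {
            kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
            kvm_vcpu_kick(vcpu);
    }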


The goal of a VCPU kick is to bring a VCPU thread out of guest mode in order to perform some KVM maintenance. To do so, an IPI is sent, forcing a guest mode exit. However, a VCPU thread may not be in guest mode at the time of the kick. Therefore, depending on the mode and state of the VCPU thread, there are two other actions a kick may take. All three actions are listed below:

1) Send an IPI. This forces a guest mode exit.
2) Wake a sleeping VCPU. Sleeping VCPUs are VCPU threads outside guest mode that wait on waitqueues; waking them removes the threads from the waitqueues, allowing them to run again.
3) Nothing. When the VCPU is not in guest mode and the VCPU thread is not sleeping, there is nothing to do.


VCPU requests are bit indices into the vcpu->requests bitmap, so generic bit operations could in principle be applied to it directly. However, VCPU request users should refrain from doing so, as it would break the abstraction. The first 8 bits are reserved for architecture-independent requests; all additional bits are available for architecture-dependent requests.
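
A sketch of that split, using the macro names I believe current include/linux/kvm_host.h uses (the example request itself is hypothetical):

    /* Bits 0-7 are reserved for architecture-independent requests. */
    #define KVM_REQUEST_ARCH_BASE   8

    /* Architectures define their own requests above the base, e.g.: */
    #define KVM_REQ_EXAMPLE         KVM_ARCH_REQ(0)   /* bit 8; hypothetical */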


This solution also requires memory barriers to be placed carefully in both the requesting thread and the receiving VCPU. With the memory barriers we can exclude the possibility of a VCPU thread observing !kvm_request_pending() on its last check and then not receiving an IPI for the next request made of it, even if the request is made immediately after the check. This is done by way of the Dekker memory barrier pattern (scenario 10 of [lwn-mb]). As the Dekker pattern requires two variables, this solution pairs vcpu->mode with vcpu->requests. Substituting them into the pattern gives:
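
The substituted pattern, reproduced from the kernel documentation from memory (the essential point is that each side writes one variable, issues a full barrier, then reads the other):

    CPU1                                    CPU2
    =================                       =================
    local_irq_disable();
    WRITE_ONCE(vcpu->mode, IN_GUEST_MODE);  kvm_make_request(REQ, vcpu);
    smp_mb();                               smp_mb();
    if (kvm_request_pending(vcpu)) {        if (READ_ONCE(vcpu->mode) ==
                                                IN_GUEST_MODE) {
        ...abort guest entry...                 ...send IPI...
    }                                       }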


As only one IPI is needed to get a VCPU to check for any/all requests, they may be coalesced. This is easily done by having the first IPI-sending kick also change the VCPU mode to something !IN_GUEST_MODE. The transitional state, EXITING_GUEST_MODE, is used for this purpose.
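
The mode change has to be atomic so that only one kicker sends the IPI. A sketch of the helper as I recall it from include/linux/kvm_host.h:

    /* Returns the old mode; only the caller that observes IN_GUEST_MODE
     * as the old value needs to send the IPI. */
    static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
    {
            return cmpxchg(&vcpu->mode, IN_GUEST_MODE, EXITING_GUEST_MODE);
    }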


Some requests, those with the KVM_REQUEST_WAIT flag set, require IPIs to be sent, and the acknowledgements to be waited upon, even when the target VCPU threads are in modes other than IN_GUEST_MODE. For example, one case is when a target VCPU thread is in READING_SHADOW_PAGE_TABLES mode, which is set after disabling interrupts. To support these cases, the KVM_REQUEST_WAIT flag changes the condition for sending an IPI from checking that the VCPU is IN_GUEST_MODE to checking that it is not OUTSIDE_GUEST_MODE.
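
Schematically, as a paraphrase of the check in virt/kvm/kvm_main.c rather than the verbatim source:

    /* Paraphrased condition for sending an IPI when kicking with @req. */
    if (req & KVM_REQUEST_WAIT)
            send_ipi = READ_ONCE(vcpu->mode) != OUTSIDE_GUEST_MODE;
    else
            send_ipi = READ_ONCE(vcpu->mode) == IN_GUEST_MODE;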


VCPU threads may need to consider requests before and/or after calling functions that may put them to sleep, e.g. kvm_vcpu_block(). Whether they do or not, and, if they do, which requests need consideration, is architecture dependent. kvm_vcpu_block() calls kvm_arch_vcpu_runnable() to check if it should awaken. One reason to do so is to provide architectures a function where requests may be checked if necessary.
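
A hypothetical architecture hook illustrating the idea; the request name and the helper function are placeholders, not real kernel code:

    int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
    {
            /* Wake if a request this architecture cares about is pending,
             * or if the VCPU can otherwise make progress. */
            return kvm_test_request(KVM_REQ_EXAMPLE_WAKE, vcpu) ||   /* hypothetical */
                   example_vcpu_has_pending_interrupt(vcpu);         /* hypothetical */
    }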


Would someone be willing to sanity-check my libvirt XML file? I'm trying to do some CPU tuning but it won't let me save this config. error: internal error: CPU IDs in <numa> exceed the <vcpu> count

To my eyes everything here looks correct. There are 16 vCPUs and 16 NUMA CPU IDs.


So my advice: if you are running in a virtualized environment, play with the number of vCPUs to get optimal performance. I know they say a minimum of 16 vCPUs, but if you have high ready times it is worth a try.


You'd want to look at the CPU usage graphs and see what's hogging CPU. If it says Search, that means you have to look at people running resource-intensive searches in your environment. For that, you'd want to go to Search -> Search Activity: Instance and check everything out from there.


You can always create alerts off the audit data to find violators who are running long-running searches, too many searches, etc. If it's not a search issue, please contact Splunk support, as it may be a case of a memory leak.


The FortiSIEM Linux Agent communicates outbound via HTTPS with the Supervisor and Collectors. The Agent registers to the Supervisor and periodically receives monitoring template updates, if any. Events are forwarded to the Collectors.


The FortiSIEM Linux Agent is available as a Linux installation script, fortisiem-linux-agent-installer-6.7.8.1757.sh, from the Fortinet Support website. See "Downloading FortiSIEM Products" for more information on downloading products from the support website.


For Enterprise installations, the Organization ID is "1", the Organization Name is "Super", and the Agent user name and password are defined on the CMDB > User page. You must create a user and check Agent Admin.


In typical installations, FortiSIEM Agents register to the Supervisor node but send events via the Collector. In many MSSP situations, customers do not want Agents to communicate directly with the Supervisor node. This requirement can be satisfied by setting up the Collector as an HTTPS proxy between the Agent and the Supervisor. This section describes the required configuration.


When the FortiSIEM Linux Agent is installed on a Linux machine, it also requires the auditd process to be installed and configured to monitor audit activity on the machine. The auditd process can generate logs in /var/log/messages, which can grow quickly, potentially filling up the disk in the root (/) partition. Linux systems have log rotation policies for /var/log/messages, but these are not aggressive enough to prevent the disk from filling. It is therefore necessary to add a new logrotate configuration that rotates /var/log/messages every 30 minutes. Follow the steps below to add this configuration.


The number of vCPUs reserved for the container. For jobs that run on Amazon EC2 resources, you can specify the vCPU requirement for the job using resourceRequirements, but you can't specify it in both vcpus and the resourceRequirements object. This parameter maps to CpuShares in the Create a container section of the Docker Remote API and the --cpu-shares option to docker run. Each vCPU is equivalent to 1,024 CPU shares; for example, reserving 4 vCPUs maps to 4,096 CPU shares. You must specify at least one vCPU. This parameter is required but can be specified in several places; it must be specified for each node at least once.


For jobs running on Amazon EC2 resources that didn't specify memory requirements using resourceRequirements, the number of MiB of memory reserved for the job. For other jobs, including all jobs that run on Fargate resources, see resourceRequirements.


This method will never return null. If you would like to know whether the service returned this field (so that you can differentiate between null and empty), you can use the hasEnvironment() method.


This method will never return null. If you would like to know whether the service returned this field (so that you can differentiate between null and empty), you can use the hasMountPoints() method.


When this parameter is true, the container is given read-only access to its root file system. This parameter maps to ReadonlyRootfs in the Create a container section of the Docker Remote API and the --read-only option to docker run.


The name of the Amazon CloudWatch Logs log stream that's associated with the container. The log group for Batch jobs is /aws/batch/job. Each container attempt receives a log stream name when it reaches the RUNNING status.


This method will never return null. If you would like to know whether the service returned this field (so that you can differentiate between null and empty), you can use the hasNetworkInterfaces() method.


This method will never return null. If you would like to know whether the service returned this field (so that you can differentiate between null and empty), you can use the hasResourceRequirements() method.


This parameter maps to LogConfig in the Create a container section of the Docker Remote API and the --log-driver option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, a container can use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver, the log system must be configured properly on the container instance, or alternatively on a remote log server for remote logging options. For more information on the options for the supported log drivers, see Configure logging drivers in the Docker documentation.
