I'm creating a dashboard and things are working well, except I'm having issues with a data extractor and converter I have created. One of my syslog inputs is receiving data from a network device. All I'm trying to do is create a bytes received/bytes sent graph.
I have an extractor set up using a regular expression on the source field rcvdbyte, using ([0-9]+), called Bytes_Received; it has a numeric value converter on it. I also have a second extractor on the source field sentbyte, using ([0-9]+), called Bytes_Sent. This also has a numeric value converter.
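For context, here is a minimal Python sketch of what the two extractors and converters effectively do; the sample field values are assumptions, since the real syslog output from the device may differ:

```python
import re

# Assumed sample field values; real device output may differ.
fields = {"rcvdbyte": "10492", "sentbyte": "2048"}

pattern = re.compile(r"([0-9]+)")  # the extractors' regular expression

extracted = {}
for name, source in (("Bytes_Received", "rcvdbyte"), ("Bytes_Sent", "sentbyte")):
    m = pattern.search(fields[source])
    if m:
        # The numeric value converter turns the captured string into a
        # number, which is what allows aggregations like Sum to run on it.
        extracted[name] = int(m.group(1))

print(extracted)  # {'Bytes_Received': 10492, 'Bytes_Sent': 2048}
```

If one of the two fields ends up mapped as a string in the index while the other is numeric, a Sum on the string-typed field would fail in exactly this way.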
In the dashboard I have created an area chart. Under the metrics I can add Bytes_Received as a Sum (over time as a row) and this works without issue. However, if I try to use Bytes_Sent as a Sum (or any mathematical function on it), it fails with an error:
The Graylog server log doesn't give me any additional info (since this appears to be a type issue from Elasticsearch, that makes sense, though); I'm just not sure where to go from here, as I'm new to this area. The Elasticsearch logs don't tell me anything either, so I'm stuck with the error I have.
My best guess, based on what I have found, is that this has something to do with the indices and I may have to go through some of that, but I wasn't sure yet and am hoping someone can put me on the right track before I go down the wrong one (or help me figure out how to get more info).
I've found that many times the image-package-extractor is unable to extract packages from some images I'm running, because the DaemonSet creates the pods with a 50MB memory request and the same amount for its limits.
kubectl edit ds image-package-extractor --namespace=kube-system
error: daemonsets.apps "image-package-extractor" could not be patched: daemonsets.apps "image-package-extractor" is forbidden: User "jk" cannot patch resource "daemonsets" in API group "apps" in the namespace "kube-system": GKE Warden authz [denied by managed-namespaces-limitation]: the namespace "kube-system" is managed and the request's verb "patch" is denied
kubectl replace -f /tmp/kubectl-edit-751860510.yaml
Error from server (Forbidden): error when replacing "/tmp/kubectl-edit-751860510.yaml": daemonsets.apps "image-package-extractor" is forbidden: User "jk" cannot update resource "daemonsets" in API group "apps" in the namespace "kube-system": GKE Warden authz [denied by managed-namespaces-limitation]: the namespace "kube-system" is managed and the request's verb "update" is denied
I created a field extractor for different fields of an event. Now I would like to search all the events from a source and apply that field extractor to see the fields I'm interested in. The field extractor appears to exist, but I don't know how to apply it in my search. I've tried REPORT- but no luck. How can I apply an already-created field extractor in a search?
Now, in 2024, seven years later, it is still not very clear how one applies a saved extraction regex to an existing search to extract fields from it, especially without access to the various server-side configuration files. Splunk has grown long in the tooth, dementia encroaching.
Reality: You probably can't do it simply.
Say you have a sourcetype X. The extractors you saved will only run against the base, plain data set sent in as X, not against your search, and they run against the base sourcetype automatically. If it was going to work, it would already be working and you would already have your field.
Now, if your search does any kind of transformation, for example pulling log fields out of JSON data using spath, or messing around with _raw or similar, the extractor you created isn't going to run against that resulting data set. I know, I've tried. The extractors get applied before that part of the search runs.
You're going to have to go into Settings -> Fields -> Field Extractions, copy/paste the regex created by the web extractor page into your search, and manually extract the field within your search using the "rex" command. You may have to tweak it slightly, especially around quotes.
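As a minimal sketch of what that workaround amounts to: you take the saved regex and run it against the raw event text yourself. The regex, field name, and sample events below are all hypothetical illustrations, not anything from a real deployment:

```python
import re

# Hypothetical regex copied from Settings -> Fields -> Field Extractions.
saved_regex = r"status=(?P<status_code>\d+)"

# Assumed sample raw events.
events = [
    "GET /index.html status=200 bytes=5120",
    "POST /login status=403 bytes=230",
]

# Manually apply the extraction, the way "rex" would at search time.
status_codes = []
for raw in events:
    m = re.search(saved_regex, raw)
    if m:
        status_codes.append(m.group("status_code"))

print(status_codes)  # ['200', '403']
```

In SPL the equivalent would be something along the lines of `... | rex "status=(?P<status_code>\d+)"`, which extracts the field at search time regardless of what the saved extraction does.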
It's a little disingenuous of the Splunk web extraction generator to take the results of the current search as its input and imply that a saved extractor will actually operate against such a search and pull fields out for you. It doesn't.
Not a useful answer. The question concerned a field extractor, not a transform. Are you implying that the ONLY way Splunk can use a field-extractor is to first create a transform? Pity, since that seems beyond the scope of an ordinary user.
It cannot be done, at least not the way you want. Splunk is stupid and non-intuitive, or maybe they want to sell professional services like ITRS does. You have to plunk down the regex in its entirety.
Each field extraction is generally applied to a sourcetype. The extractions are only going to work on the sourcetypes they've been set up for, only in the apps they've been configured in (unless the extraction is set to global sharing), and only for those users who have read permission on the app they are found in.
I have the same issue here. And I cannot access the transforms.conf file (or the server's file system at all) to get the stanza of my field extractor.
In the Splunk Web UI, in the field extractions overview, the name of my field extractor looks like my_sourcetype : EXTRACT-my_new_field.
Is there any other way to derive the stanza through the Splunk Web UI?
Since you have an EXTRACT option configured, there is no transforms.conf stanza linked.
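For reference, an EXTRACT-style extraction lives in props.conf rather than transforms.conf. A minimal sketch of such a stanza, reusing the names from the post above and with a placeholder regex (the actual pattern is whatever was saved through the web UI):

```
[my_sourcetype]
EXTRACT-my_new_field = (?<my_new_field>\d+)
```

The stanza header is the sourcetype, and the part after EXTRACT- is the extraction's name, which matches the "my_sourcetype : EXTRACT-my_new_field" naming shown in the field extractions overview.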
An example of a REPORT option is the default field extraction of splunk_web_access, which you can see using this URI:
Your answer from 2020 was very unclear, less clear than the documentation. OK, so here goes: Splunk provides a fascinating way to search and report on log data, and promises simplicity in various use cases. One extremely common use case (one would think) is for users of the enterprise edition to create custom regular expressions in order to extract values from select log lines, and then do various things with those extracted values.
The documentation and GUI lead one to think one can create a Perl/Python-style extended regex to extract such fields. However, instead of then being able to _use_ such a regex, the user must _save_ it somehow with a name. And then the documentation goes off into the weeds without any explanation of how to _use_ such saved extractions.
There's lots of discussion about props.conf and transforms.conf, but this appears to predate the enterprise edition, in which ordinary users do not have such godlike powers over a centralized, enterprise Splunk deployment.
So, as simply as possible, please tell me what additional steps an ordinary user within a Splunk enterprise deployment must take in order to create searches, and later reports and alerts, using saved field extractions.
It keeps giving me this error:
Error in 'extract' command: Failed to parse the key-value pair configuration for transform 'MYFIELD'.
Do you have any idea what it could be? I've been stuck on it for a few days.
I'm even later to the party, but am running into a somewhat similar situation. I have new data coming in via syslog, but no fields are auto-extracted, so I'm using REPORT to extract them. I have the stanza ready, but I placed it on the heavy forwarder by mistake. Should I place it in props.conf on the search head or on the indexer for the change to work?
If you own a BG10 extractor and are looking to replace any parts, get more cleaning solution, or buy attachments like the upholstery attachment, we can help. We sell replacement parts, attachments, cleaning solutions and other cleaning equipment. Give us a call or shop through our accessories and parts to purchase parts online.
The BG10 extractor has many benefits. It is easy to maneuver with its forward & backward movement capabilities & up to 50 feet of cleaning distance, and it provides a powerful solution to deep cleaning both carpets and upholstered furniture with the optional upholstery tool.
The BG10 Extractor is one of the most effective ways to clean carpets and upholstered furniture. In this video we cover how to use your BG10 extractor. If you have any additional questions, you can read the instructions below or give our team a call.
Before you get started with the BG10, you need to prepare the area. First and foremost, you will want to quickly vacuum the area to get rid of any loose dirt, debris, or other items on the surface of the carpet.
Remove the dirty water tank that sits on top of the clean water tank. Then take the clean water tank to your sink and fill it with warm tap water to the water line indicated on the tank. Then add two capfuls of Bissell Commercial Cleaning solution. Once complete, replace the tanks so that the dirty water tank is properly seated on top of the clean water tank.
Turn on the machine, recline the handle and press the trigger to start cleaning. Make slow forward and backward passes as you clean your carpet. Then release the trigger and make one slow forward and backward pass over the same area to suction up dirty water. Avoid oversaturating any single area.
The flow indicator will stop spinning when it is time to refill the clean water tank. At this point you should empty the dirty water tank and replace the water and cleaning solution in the clean water tank following the same steps from earlier. When you finish cleaning, make sure to empty and rinse out both the clean and dirty water containers. Make sure to check the brush roll as well.
Watch this video to learn more about how to use the upholstery accessory for your BG10 extractor. This is a great option for cleaning stairs, furniture and other upholstered items at your business. For more information, make sure to give us a call or contact us through our website.
Press the red trigger and clean with a back-and-forth motion to cover a section of the furniture you are cleaning. As when using the extractor, release the trigger and continue passing over the space to suction up dirty water, and do not oversaturate any single area.