Based on the above error, the search bundle size is 800+MB and, as a result, bundles are not getting downloaded to the indexers, causing searches to fail. We have taken the following steps to debug this situation.

On the search head, the knowledge bundles reside under the $SPLUNK_HOME/var/run directory. They are tar files, so you can run tar -tvf against them to see their contents. The knowledge bundle gets distributed to the $SPLUNK_HOME/var/run/searchpeers directory on each search peer, and the search peers use the search head's knowledge bundle to execute queries on its behalf.

When executing a distributed search, the peers are ignorant of any local knowledge objects; they have access only to the objects in the search head's knowledge bundle. Bundles typically contain a subset of files (configuration files and assets) from $SPLUNK_HOME/etc/system, $SPLUNK_HOME/etc/apps, and $SPLUNK_HOME/etc/users. The process of distributing knowledge bundles means that, by default, peers receive nearly the entire contents of the search head's apps. If an app contains large binaries or CSV files that do not need to be shared with the peers, you can eliminate them from the bundle and thus reduce the bundle size.

Next, we checked the content of the bundle on the search head:

cd $SPLUNK_HOME/var/run
tar -tvf sh604-1409261525.bundle

We noticed that the bundle had many lookup files, some as big as 100MB. One of the options we have is to filter out lookups using:

What is the recommendation on filtering lookups on the search head? When is a lookup required on the search head versus the indexers? We used the following guidelines to determine which lookups can be filtered.

I) The lookup is only needed on the search head when the output fields from the lookup tables are always required post-reporting. For example, in this scenario, the lookup is only needed on the SH:

index=test | stats count by clientip, domain | lookup domain2datacenter domain OUTPUT datacenter

II) Here's an example where the lookup is needed on the indexers:

index=test | lookup domain2datacenter domain OUTPUT datacenter | stats count by clientip, datacenter

Note: the stats count is the point at which map/reduce happens and results are sent to the search head. This typically happens with the first reporting command, so what matters is: do I need the lookup before or after the first reporting command? That is the determining factor for needing the lookup on the indexers or not. In the 2nd example, I use a field produced by the lookup ("datacenter") in my first reporting command; clearly, my indexers are going to need access to the lookup in order to run that stats. In the 1st example, that is not the case.

You need local=true if you want the indexers not to attempt to run the lookup. So, the 1st example should actually be:

index=test | stats count by clientip, domain | lookup local=true domain2datacenter domain OUTPUT datacenter

III) In addition, if a lookup is used only for a dashboard drop-down (selection), it does not need to be sent to the indexers.

IV) If the lookup is defined using nf like shown below, these lookups are defined as Global and will be required on the indexers.
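Point IV refers to a definition "shown below" that did not survive extraction. If it refers to an automatic lookup, that is configured in props.conf; automatic lookups run as part of search-time field resolution on the indexers, which is why they must travel in the bundle. The sourcetype below and the lookup/field names (reused from the examples above) are illustrative only, not the author's actual configuration:

```ini
# props.conf -- hedged sketch of an automatic lookup definition.
# Sourcetype and lookup/field names are hypothetical.
[access_combined]
LOOKUP-datacenter = domain2datacenter domain OUTPUT datacenter
```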
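The configuration referenced after "filter out lookups using:" appears to have been cut off in the source. The standard Splunk mechanism for excluding files from the knowledge bundle is a [replicationBlacklist] stanza in distsearch.conf on the search head, where each entry is a regex matched against file paths in the bundle. The stanza below is a hedged sketch with a hypothetical entry name, not the author's actual setting:

```ini
# distsearch.conf on the search head -- illustrative only.
# Each value is a regex; matching files are excluded from the bundle.
[replicationBlacklist]
huge_lookup = (.*)domain2datacenter\.csv
```

Note that a search which still needs an excluded lookup on the indexers will no longer find it there, which is why the local=true guideline above matters.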
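The bundle-inspection step described above can be wrapped in a small helper that lists the largest entries first, which makes oversized lookups easy to spot. This is a sketch, not part of the original procedure: `bundle_top` is a name invented here, and it assumes a GNU-style `tar -tvf` listing with the file size in the third column.

```shell
# bundle_top: list the N largest entries in a Splunk knowledge bundle.
# A knowledge bundle is a plain tar file, so tar -tvf can read it.
# Assumes tar's verbose listing puts the file size in column 3 (GNU tar).
bundle_top() {
    bundle="$1"
    n="${2:-20}"
    tar -tvf "$bundle" | sort -k3 -rn | head -n "$n"
}

# Example invocation (path and bundle name as in the text above):
# bundle_top "$SPLUNK_HOME/var/run/sh604-1409261525.bundle" 10
```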