Splunk will automatically generate an internal server certificate on first run. This certificate is used by default for SSL-secured communication from splunkd over port 8089. It is also used as part of the mongodb initialisation on the KVStore.
This certificate is auto-generated on first run and has an expiry date two years into the future.
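If you want to check when your current certificate expires, openssl can read it directly (the path assumes a default $SPLUNK_HOME layout):

openssl x509 -enddate -noout -in $SPLUNK_HOME/etc/auth/server.pem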
Now, under normal use this isn’t an issue, as upgrades to the Splunk platform will renew the certificate as required (another reason to keep things up to date).
In the event it expires (I’ve seen this happen multiple times due to environments sitting on the same version for two years) you will see very odd behaviour from Splunk. Most noticeably, the KVStore will refuse to start with the expired certificate. A running instance will happily continue under the expired certificate, so everything is peachy until it restarts, at which point mongodb will exit on start due to said expired cert…
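You can also check the KVStore state directly from the Splunk CLI; the exact output varies by version, but a failed status here after a restart, alongside an expired internal certificate, is the classic symptom:

$SPLUNK_HOME/bin/splunk show kvstore-status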
How do I fix this?
Actually, it’s quite simple. To force Splunk to re-issue the certificate, simply move the current server.pem out of $SPLUNK_HOME/etc/auth and restart Splunk.
A new certificate will be generated to replace the missing one, with a new expiry date two years into the future.
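As a sketch of the fix, assuming a default $SPLUNK_HOME and that you’d rather keep the old certificate around than delete it:

mv $SPLUNK_HOME/etc/auth/server.pem $SPLUNK_HOME/etc/auth/server.pem.expired
$SPLUNK_HOME/bin/splunk restart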
Splunk doesn’t make it straightforward to change the language the Splunk UI presents back to you. It tries to be overly helpful by responding to the language of your web browser. This is fine, but sometimes, for testing or for corporate machine build reasons, you might not want to see Splunk in the same language / region setting as your OS.
Your web browser sends language information to Splunk as part of every request, in the Accept-Language header field. For many of us using standard machine builds, this ends up being en-US, which is fine, unless like me you really don’t like timestamps displayed in AM / PM format (it’s a tiny niggle, I know…). I make sure my browser sends an Accept-Language value of en-GB, which causes Splunk to display all time fields in 24 hour time. It just makes life simpler when dealing with time.
If you’re using a non-English language version of your OS, rest assured - this should ensure Splunk returns the language you expect. This of course assumes that the language is supported.
You can also use various browser extensions to manipulate the Accept-Language header and use Splunk in an alternate language. Great for testing localisation needs.
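If you just want to see which language Splunk Web will pick for a given header without touching your browser at all, one rough check (the hostname and port are assumptions, and redirect behaviour can vary between versions) is to send the Accept-Language header yourself and look at the locale in the redirect:

curl -sI -H "Accept-Language: en-GB" http://splunk.example.com:8000/ | grep -i "^location"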
Quite often, especially when using Splunk Enterprise Security, we need a dynamic lookup between IP addresses seen in events and hostname values. This is useful for the user looking at the event data, but it also allows events from sources such as network traffic to be associated with events from sources such as the Windows Event Log.
How can we get Splunk to do this association automatically at search time?
The answer is time based lookups.
The lookup file is very, very simple:

dhcp_time, dhcp_hostname, dhcp_ip
timestamp, hostname, ip
Time-based lookups use the time of an event to perform a lookup against a set of values with associated timestamps. Splunk will retrieve the closest match between the search-time event and the lookup file. A maximum skew between the two times can also be configured, so, for example, we can have Splunk retrieve the closest matching value in the lookup file that occurred within 1 hour of the event in question.
These can be configured using a standard props / transforms rule on your Splunk Search Head.
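The transforms side defines the lookup itself, including the time field and the maximum skew. As a sketch (the lookup file name and time format here are assumptions; adjust them to match your own lookup file):

transforms.conf

[dhcp_time_lookup]
filename = dhcp_lookup.csv
time_field = dhcp_time
time_format = %s
max_offset_secs = 3600

The props side then applies the lookup automatically to the sourcetype: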
props.conf
[WinEventLog:Microsoft-Windows-Sysmon/Operational]
LOOKUP-dhcp_lookup_auto = dhcp_time_lookup dhcp_hostname AS host OUTPUTNEW dhcp_ip AS dhcp_ip
The result of this is an automatic lookup that runs on the WinEventLog:Microsoft-Windows-Sysmon/Operational sourcetype, matching against the single entry of the lookup file whose timestamp field is no more than one hour behind our search-time event. This offset allows us to have multiple entries in this lookup over time, which means that as new IPs are assigned, events should match against the most appropriate single entry based on time.
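A quick way to confirm the automatic lookup is firing is to search that sourcetype and check the output field appears (field names as per the example above):

sourcetype="WinEventLog:Microsoft-Windows-Sysmon/Operational" | table _time host dhcp_ip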
The lookup file itself can be automatically populated by Splunk using SPL and a DHCP dataset, such as one generated by the Windows DHCP service.
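As a rough sketch of a scheduled search that does this - the index, sourcetype, and source field names here are assumptions and will depend on how your DHCP logs are onboarded - each run appends new lease records to the lookup:

index=dhcp sourcetype=dhcp
| eval dhcp_time=_time
| rename host_name AS dhcp_hostname, ip AS dhcp_ip
| table dhcp_time dhcp_hostname dhcp_ip
| outputlookup append=true dhcp_lookup.csv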
Remember - if you want to use these globally, all knowledge objects must be shared globally! That includes the lookup file AND the lookup definition. If you’re using Splunk Enterprise Security, consider using the Managed Lookups feature, as this will handle permissions for you nicely.
Therefore, what we end up with is an automatic lookup that will always enrich network data with a DHCP hostname, while events such as the Windows Event Log can be enriched with an accurate, dynamically assigned IP address.