Oathkeeper rule management

Hi,

What is the recommended way to keep Oathkeeper rules up to date?
I see from the docs that Oathkeeper reads rules from a file or an HTTPS URL, but is there any logic implemented around detecting changes to these sources?

Does Oathkeeper poll the given HTTPS URL for updates to the ruleset, or watch the file source for changes and then reload it automatically?

If not, how do people manage updates to their rulesets?

Yes, file sources are watched for changes and reloaded automatically. HTTPS sources are not at the moment.
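For reference, the sources are configured as a list under access_rules.repositories in the Oathkeeper config. A rough sketch (the paths and URL below are placeholders), assuming a reasonably recent version:

    access_rules:
      repositories:
        # Local file source: watched for changes and reloaded automatically
        - file:///etc/rules/access-rules.json
        # HTTPS source: fetched, but not watched for changes at the moment
        - https://example.com/access-rules.json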

Does this work with an EFS mount shared across nodes in a K8s cluster? We have Oathkeeper running in multiple pods/nodes across a cluster with the configs located on an EFS mount, and the rule changes are not detected on all nodes. Funnily enough, only one of the nodes triggers a reload, even though the EFS mount is updated on all nodes.

What is EFS?

Hi,

We are deploying into K8s on EC2 instances in AWS and are therefore using https://aws.amazon.com/efs/ (cloud NFS) for our pods, so that the config/rules etc. are persisted across the cluster.

We are running 2 instances and noticed that modifying the rules on the NFS mount directly, or from one of the nodes, does not trigger a reload on every pod; one of the Oathkeeper pods, however, always detects the changes. I assume it's something related to the way AWS EFS updates the file modification dates or the way Oathkeeper is notified about change events on the file system.
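For context, this is roughly how the shared volume is mounted into the Oathkeeper pods (names, image and paths below are placeholders; the PVC is backed by the EFS file system):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: oathkeeper
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: oathkeeper
      template:
        metadata:
          labels:
            app: oathkeeper
        spec:
          containers:
            - name: oathkeeper
              image: oryd/oathkeeper
              args: ["serve", "--config", "/etc/config/config.yaml"]
              volumeMounts:
                # Shared EFS mount holding config.yaml and the access rules
                - name: oathkeeper-config
                  mountPath: /etc/config
          volumes:
            - name: oathkeeper-config
              persistentVolumeClaim:
                # PVC backed by the shared EFS file system
                claimName: oathkeeper-efs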

Yeah, Kubernetes has an “AtomicWriter” that uses symbolic links to update files. This can be quite tricky to work with, but I think it should be working properly. We were actually working on a k8s-able file watcher outside of viper (our config management) over the last few days.

I’m not sure what EKS is doing differently there. Are you using config maps or are you mounting the volume manually and updating the files on the volume manually as well?

One possibility is that fsnotify/inotify don’t work properly with network volumes, but it’s honestly a guessing game for me, as I never use AWS. My recommendation would be to check for fsnotify/inotify issues in relation to NFS/EFS/EKS.

Since you said that one node detects the change and the other doesn’t, giving more detail here (and possibly detecting a pattern - e.g. the first instance always reloads, the second doesn’t) would be of tremendous help to get a gut feeling of what’s going on.

Hi, thanks for the feedback. We have to focus on a demo we are preparing now, so I can’t put much time in this week, but I will keep this on my task list to analyze further and get back to you.

We were doing a manual volume mount from all nodes into EFS, not config maps, but we might also try some other storage option like https://github.com/longhorn/longhorn.

I will get back to you on this thread when we get time to gather more info on EFS and fsnotify/inotify issues.

I recommend using the official Helm charts, which use config maps and make all of this much easier for you. You can find them here: https://k8s.ory.sh/helm/
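Roughly speaking, you put both the Oathkeeper configuration and the access rules into the chart’s values, and the chart renders them into config maps for you. Something along these lines (exact key names and the mount path may differ between chart versions, so treat this as a sketch):

    oathkeeper:
      config:
        # Regular Oathkeeper configuration; point the rule repository at the
        # file the chart mounts from the rules config map (check the chart
        # for the exact path)
        access_rules:
          repositories:
            - file:///etc/rules/access-rules.json
      accessRules: |
        [
          {
            "id": "example-rule",
            "upstream": { "url": "http://my-service" },
            "match": {
              "url": "http://my-gateway/<.*>",
              "methods": ["GET"]
            },
            "authenticators": [{ "handler": "noop" }],
            "authorizer": { "handler": "allow" },
            "mutators": [{ "handler": "noop" }]
          }
        ]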

Thanks for that info. We did install with Helm and, like you say, it is much easier to install and the config maps are easier to manipulate.

OK, so I assume this resolved your problems then? :slight_smile:

Not exactly. Now I’m installing with the same config I’ve been testing locally in Docker, but for some reason it’s not accepting my config for introspection:

msg="The configuration is invalid and could not be loaded." [config_key=authenticators.oauth2_introspection.config]="doesn't validate with \"#/definitions/configAuthenticatorsOauth2Introspection\"" [config_key=authenticators.oauth2_introspection.enabled]="value must be false" [config_key=authenticators.oauth2_introspection]="oneOf failed" config_file=/etc/config/config.yaml

For some reason it wants me to disable oauth2_introspection

The config is simple and works locally in Docker:

  oauth2_introspection:
    enabled: true

    config:
      introspection_url: http://x.x.x.x/auth/realms/Developer/protocol/openid-connect/token/introspect
      scope_strategy: exact
#      required_scope:
#        - email
#        - profile
      pre_authorization:
        enabled: false
#        client_id: 
#        client_secret: 
#        scope:
#          - email
        token_url: http://x.x.x.x/auth/realms/Developer/protocol/openid-connect/token
      token_from:
        header: Authorization
        # or
        # query_parameter: auth-token
        # or
        # cookie: auth-token
      introspection_request_headers:
        x-forwarded-proto: http

      cache:
        enabled: true
        ttl: 60s

We’re now trying to figure out why this is not valid when it works locally.

Figured this out: the JSON schema marks these fields as required for pre_authorization.

         "required": [
            "client_id",
            "client_secret",
            "token_url"
          ],

I’m running with this setting, so I did not set those mandatory fields:

      pre_authorization:
        enabled: false

This seems to work locally, but in K8s under Helm I had to add these fields and put in dummy values.
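This is roughly what I added (the client credentials are just dummy placeholders to satisfy the schema):

      pre_authorization:
        enabled: false
        # Dummy values, only present to satisfy the schema's required fields
        client_id: dummy
        client_secret: dummy
        token_url: http://x.x.x.x/auth/realms/Developer/protocol/openid-connect/token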

Actually, this is not a working fix. Even with this pre_authorization block the problem still exists. Any ideas?