Influxd uses 300% CPU constantly

Recently my influxd 2.0 process has been consuming all my CPU cycles. I'm on a Linode VM with 8 GB RAM and 4 cores, holding about two years' worth of assorted data. Unfortunately I don't know how to give you statistics on sizes / shards / cardinality, as all the instructions I can find are for InfluxDB 1.x rather than 2.0. Here are some profile statistics I gathered:
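In case anyone wants to reproduce these, the profiles come straight from influxd's /debug/pprof endpoints, and the closest 2.0 equivalents I could guess at for the size / cardinality stats are sketched alongside (bucket name and data path are placeholders for my setup; corrections welcome):

    # 60s CPU profile, in-use heap, and goroutine dump from the running server
    go tool pprof 'http://localhost:8086/debug/pprof/profile?seconds=60'
    go tool pprof 'http://localhost:8086/debug/pprof/heap'
    go tool pprof 'http://localhost:8086/debug/pprof/goroutine'

    # Series cardinality for a bucket via Flux ("my-bucket" is a placeholder)
    influx query 'import "influxdata/influxdb"
    influxdb.cardinality(bucket: "my-bucket", start: -2y)'

    # On-disk size of the storage engine (default Linux package path; may differ)
    du -sh /var/lib/influxdb/engine

Inside each pprof session I ran the text command to get the tables below.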

File: influxd
Type: cpu
Time: Sep 28, 2021 at 8:15pm (MDT)
Duration: 60.14s, Total samples = 169.57s (281.97%)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) text
Showing nodes accounting for 161.46s, 95.22% of 169.57s total
Dropped 278 nodes (cum <= 0.85s)
Showing top 10 nodes out of 36
      flat  flat%   sum%        cum   cum%
   149.82s 88.35% 88.35%    149.82s 88.35%  runtime.memmove
     5.74s  3.39% 91.74%      6.92s  4.08%  runtime.findObject
     1.60s  0.94% 92.68%      3.09s  1.82%  runtime.scanobject
     1.52s   0.9% 93.58%      8.58s  5.06%  runtime.wbBufFlush1
     1.17s  0.69% 94.27%      1.41s  0.83%  github.com/influxdata/flux/internal/execute/groupkey.groupKeyLess
     0.62s  0.37% 94.63%      0.90s  0.53%  runtime.spanOf (inline)
     0.44s  0.26% 94.89%      3.04s  1.79%  runtime.mallocgc
     0.25s  0.15% 95.04%      9.82s  5.79%  runtime.bulkBarrierPreWrite
     0.17s   0.1% 95.14%      2.33s  1.37%  github.com/influxdata/flux/execute.(*ColListTableBuilder).AddCol
     0.13s 0.077% 95.22%      3.34s  1.97%  runtime.gcDrain
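
One note on that table: almost all the samples land flat in runtime.memmove, which says the time goes to copying memory but not on whose behalf. pprof's peek option can list memmove's callers from the same capture; I haven't pasted that output here, but the invocation is just:

    # Show callers/callees of runtime.memmove in a fresh 60s CPU capture
    go tool pprof -peek=runtime.memmove 'http://localhost:8086/debug/pprof/profile?seconds=60'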

File: influxd
Type: inuse_space
Time: Sep 28, 2021 at 8:16pm (MDT)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) text
Showing nodes accounting for 986.84MB, 91.33% of 1080.49MB total
Dropped 1101 nodes (cum <= 5.40MB)
Showing top 10 nodes out of 70
      flat  flat%   sum%        cum   cum%
  594.85MB 55.05% 55.05%   594.85MB 55.05%  github.com/influxdata/flux/execute.(*ColListTableBuilder).AddCol
   69.19MB  6.40% 61.46%   186.70MB 17.28%  github.com/influxdata/flux/stdlib/universe.(*fixedWindowTransformation).newWindowGroupKey
   63.25MB  5.85% 67.31%    63.25MB  5.85%  github.com/influxdata/influxdb/v2/pkg/estimator/hll.(*Plus).toNormal
   61.14MB  5.66% 72.97%    61.14MB  5.66%  github.com/influxdata/flux/values.NewTime (inline)
   56.34MB  5.21% 78.18%    56.37MB  5.22%  github.com/influxdata/flux/internal/execute/groupkey.newGroupKey
   47.17MB  4.37% 82.55%    47.17MB  4.37%  github.com/influxdata/flux/execute.NewColListTableBuilder
   31.11MB  2.88% 85.43%    56.91MB  5.27%  github.com/influxdata/flux/internal/execute/groupkey.(*Lookup).createOrSetInGroup
   25.80MB  2.39% 87.82%    25.80MB  2.39%  github.com/influxdata/flux/internal/execute/groupkey.(*Lookup).newKeyGroup (inline)
   20.60MB  1.91% 89.72%    20.60MB  1.91%  github.com/influxdata/flux/interval.Window.GetOverlappingBounds
   17.39MB  1.61% 91.33%    17.39MB  1.61%  github.com/influxdata/flux/execute.NewTriggerFromSpec

File: influxd
Type: goroutine
Time: Sep 28, 2021 at 8:16pm (MDT)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) text
Showing nodes accounting for 1580, 99.75% of 1584 total
Dropped 101 nodes (cum <= 7)
Showing top 10 nodes out of 34
      flat  flat%   sum%        cum   cum%
      1580 99.75% 99.75%       1580 99.75%  runtime.gopark
         0     0% 99.75%         42  2.65%  bytes.(*Buffer).ReadFrom
         0     0% 99.75%         42  2.65%  crypto/tls.(*Conn).Read
         0     0% 99.75%         42  2.65%  crypto/tls.(*Conn).readFromUntil
         0     0% 99.75%         42  2.65%  crypto/tls.(*Conn).readRecord (inline)
         0     0% 99.75%         42  2.65%  crypto/tls.(*Conn).readRecordOrCCS
         0     0% 99.75%         42  2.65%  crypto/tls.(*atLeastReader).Read
         0     0% 99.75%       1024 64.65%  github.com/influxdata/influxdb/v2/query/control.(*Controller).processQueryQueue
         0     0% 99.75%       1024 64.65%  github.com/influxdata/influxdb/v2/query/control.New.func1
         0     0% 99.75%        128  8.08%  github.com/influxdata/influxdb/v2/task/backend/scheduler.(*TreeScheduler).work

My CPU usage (from htop) looks like this:

    PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
2185691 influxdb   20   0 19.5G 4384M  109M S 303. 55.2  1h58:39 /usr/bin/influxd
2185774 influxdb   20   0 19.5G 4384M  109M R 99.8 55.2 22:16.45 /usr/bin/influxd
2185782 influxdb   20   0 19.5G 4384M  109M R 99.8 55.2 20:24.24 /usr/bin/influxd
2185783 influxdb   20   0 19.5G 4384M  109M R 99.8 55.2 21:19.24 /usr/bin/influxd
2185777 influxdb   20   0 19.5G 4384M  109M S  4.0 55.2  1:54.92 /usr/bin/influxd

Update: I think I solved it. See my VM's CPU graph below:

[Screenshot: Linode VM CPU graph, Sep 29, 2021 at 12:41, showing the drop]

That drop is from when I deleted two alert rules I had defined in Grafana 8 and then restarted influxd and Grafana. Something in those alert rules seems to have caused influxd to go crazy, which fits the profiles above: most of the in-use heap was held by Flux's ColListTableBuilder and the CPU was pegged in runtime.memmove, pointing at query execution rather than writes or compactions. Anyone have any insights into why a Grafana alert rule would do this?
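
In case it helps anyone compare before/after, here's a quick non-interactive way to re-check the top CPU consumers (same endpoint as the profiles above):

    # 10-second CPU profile, top functions printed straight to stdout
    go tool pprof -top 'http://localhost:8086/debug/pprof/profile?seconds=10' | head -n 15

If the alert rules really were the culprit, re-enabling them one at a time while watching that output should show which one drags runtime.memmove back to the top.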