InfluxDB high CPU usage after a batch write

Hello everyone!
Setup:

  • InfluxDB 2.7.0 running in a Docker container.
  • The database serves writes and reads from several services and from Telegraf.
  • The target bucket has a 7-day retention period.

Everything works fine until I write a somewhat larger batch of data.

Problem
After writing 675 points at once via line protocol (this write operation is no different from normal, stable service traffic):
Format: `measurement field_key=field_value epoch_seconds`

data 7554e18b-f031-4777-a427-24301c7a0989=30.0 1712666633 
data 4279dbf1-e230-419f-99b7-7821f3ceb68a=50.0 1712666633 
data 16516605-7df0-4c82-8826-39e70bffe445=5.0 1712666633  
data 50951fdf-0730-4e96-ba8c-69feabfc3167=10.0 1712666633 
data afca57d5-a4cc-4bf8-8e2f-973dbfc75b47=1.0 1712666633
# ... 670 more points in the same batch
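
For completeness, here is a minimal sketch of how such a batch write could look, assuming the official `influxdb-client` Python library; the `url`, `token`, `org`, and `bucket` values are placeholders, not my real setup:

```python
# Minimal sketch of the batch write using the influxdb-client Python library.
# Connection details are placeholders.
from influxdb_client import InfluxDBClient, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS

lines = [
    "data 7554e18b-f031-4777-a427-24301c7a0989=30.0 1712666633",
    "data 4279dbf1-e230-419f-99b7-7821f3ceb68a=50.0 1712666633",
    # ... remaining points, 675 in total
]

with InfluxDBClient(url="http://<my_host>:8086", token="<token>", org="<org>") as client:
    write_api = client.write_api(write_options=SYNCHRONOUS)
    # WritePrecision.S because the timestamps are epoch seconds.
    write_api.write(bucket="<bucket>",
                    record="\n".join(lines),
                    write_precision=WritePrecision.S)
```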

After this write, CPU usage stays at 140-160% for about half an hour; the usual level is below 1%.
The CPU usage over time is shown in the attached image.

Output of `go tool pprof http://<my_host>:8086/debug/pprof/profile?seconds=120` followed by `top 25`:
      flat  flat%   sum%        cum   cum%
    12.27s  7.19%  7.19%     14.16s  8.30%  runtime.findObject
    10.09s  5.91% 13.10%     73.91s 43.30%  runtime.mallocgc
     9.90s  5.80% 18.90%     12.49s  7.32%  runtime.step
     9.52s  5.58% 24.48%     26.84s 15.73%  runtime.pcvalue
     7.47s  4.38% 28.86%     40.46s 23.71%  runtime.scanobject
     7.07s  4.14% 33.00%     15.64s  9.16%  runtime.greyobject
     6.89s  4.04% 37.03%     38.61s 22.62%  runtime.gentraceback
     6.61s  3.87% 40.91%      6.80s  3.98%  runtime.pageIndexOf (inline)
     4.23s  2.48% 43.39%      4.23s  2.48%  runtime.nextFreeFast (inline)
     3.36s  1.97% 45.35%      3.36s  1.97%  runtime.(*moduledata).textAddr
     2.76s  1.62% 46.97%      2.98s  1.75%  runtime.findfunc
     2.74s  1.61% 48.58%      6.64s  3.89%  runtime.heapBitsSetType
     2.74s  1.61% 50.18%      2.74s  1.61%  runtime.memclrNoHeapPointers
     2.59s  1.52% 51.70%      2.59s  1.52%  runtime.readvarint (inline)
     2.51s  1.47% 53.17%      2.65s  1.55%  runtime.writeHeapBits.flush
     2.40s  1.41% 54.58%      2.48s  1.45%  runtime.heapBitsForAddr
     2.27s  1.33% 55.91%      7.86s  4.61%  github.com/google/flatbuffers/go.(*Builder).WriteVtable
     1.95s  1.14% 57.05%      1.96s  1.15%  runtime.(*fixalloc).alloc
     1.85s  1.08% 58.13%      2.84s  1.66%  github.com/influxdata/flux/internal/execute/groupkey.groupKeyLess
     1.84s  1.08% 59.21%      3.14s  1.84%  runtime.(*sweepLocked).sweep
     1.46s  0.86% 60.07%     17.48s 10.24%  github.com/google/flatbuffers/go.(*Builder).Prep
     1.38s  0.81% 60.87%      4.74s  2.78%  runtime.funcInfo.entry (partial-inline)
     1.37s   0.8% 61.68%      5.29s  3.10%  github.com/influxdata/flux/values.(*object).Range
     1.17s  0.69% 62.36%      1.37s   0.8%  runtime.markrootSpans
     1.16s  0.68% 63.04%      1.17s  0.69%  runtime.spanOf (inline)

What is wrong with InfluxDB, and how can I reduce the CPU usage?

Possible workaround ("crutch" solution):
Write the data in parts over the course of an hour, for example 50 points every 5 minutes.
In this case I only get short local CPU peaks of 50-100% lasting 1-2 minutes, which is acceptable.
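
A sketch of that workaround, under the same assumptions as the write example above (`influxdb-client` Python library, placeholder connection details):

```python
# Sketch of the chunked-write workaround: 50 points every 5 minutes,
# so 675 points take 14 chunks, roughly 65 minutes in total.
import time
from influxdb_client import InfluxDBClient, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS

CHUNK_SIZE = 50
PAUSE_SECONDS = 5 * 60

def write_in_chunks(lines):
    with InfluxDBClient(url="http://<my_host>:8086", token="<token>", org="<org>") as client:
        write_api = client.write_api(write_options=SYNCHRONOUS)
        for i in range(0, len(lines), CHUNK_SIZE):
            write_api.write(bucket="<bucket>",
                            record="\n".join(lines[i:i + CHUNK_SIZE]),
                            write_precision=WritePrecision.S)
            if i + CHUNK_SIZE < len(lines):
                time.sleep(PAUSE_SECONDS)  # spread the load over ~1 hour
```

The client library's built-in `WriteOptions` batching could probably achieve something similar, but the explicit loop makes the pacing obvious.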