InfluxDB data not showing up anymore (was working but broke)

Hello

I am pulling data from ethermine.org and luno.com (for luno I am using the wrapper they provide, GitHub - luno/luno-python: Python SDK for the Luno API; the ethermine calls I wrote myself) with a Python script and telegraf, storing it in InfluxDB, and displaying it with Grafana. It was working until yesterday, when it stopped: my fields and measurements no longer show up in the InfluxDB UI or in Grafana. I tried checking the logs with sudo journalctl -u influxdb.service, but they don't give me any useful info (I think?). Running my Python script manually or via telegraf
(telegraf --config /etc/telegraf/telegraf.conf --input-filter exec --test) produces the expected output, but the data is just not showing up.

I will provide some screenshots of the log… if someone could help me troubleshoot, or suggest some tests I can run to narrow down where the fault is, I would be grateful.
I tried running influxd --log-level=debug, but the log still shows lvl=info (see screenshot).


Here is my telegraf config:

# Configuration for telegraf agent
[agent]
  ## Default data collection interval for all inputs
  interval = "10s"
  ## Rounds collection interval to 'interval'
  ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
  round_interval = true

  ## Telegraf will send metrics to outputs in batches of at most
  ## metric_batch_size metrics.
  ## This controls the size of writes that Telegraf sends to output plugins.
  metric_batch_size = 1000

  ## For failed writes, telegraf will cache metric_buffer_limit metrics for each
  ## output, and will flush this buffer on a successful write. Oldest metrics
  ## are dropped first when this buffer fills.
  ## This buffer only fills when writes fail to output plugin(s).
  metric_buffer_limit = 10000

  ## Collection jitter is used to jitter the collection by a random amount.
  ## Each plugin will sleep for a random time within jitter before collecting.
  ## This can be used to avoid many plugins querying things like sysfs at the
  ## same time, which can have a measurable effect on the system.
  collection_jitter = "0s"

  ## Default flushing interval for all outputs. Maximum flush_interval will be
  ## flush_interval + flush_jitter
  flush_interval = "10s"
  ## Jitter the flush interval by a random amount. This is primarily to avoid
  ## large write spikes for users running a large number of telegraf instances.
  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
  flush_jitter = "0s"

  ## By default or when set to "0s", precision will be set to the same
  ## timestamp order as the collection interval, with the maximum being 1s.
  ##   ie, when interval = "10s", precision will be "1s"
  ##       when interval = "250ms", precision will be "1ms"
  ## Precision will NOT be used for service inputs. It is up to each individual
  ## service input to set the timestamp at the appropriate precision.
  ## Valid time units are "ns", "us" (or "µs"), "ms", "s".
  precision = ""

  ## Logging configuration:
  ## Run telegraf with debug log messages.
  debug = false
  ## Run telegraf in quiet mode (error log messages only).
  quiet = false
  ## Specify the log file name. The empty string means to log to stderr.
  logfile = ""

  ## Override default hostname, if empty use os.Hostname()
  hostname = ""
  ## If set to true, do not set the "host" tag in the telegraf agent.
  omit_hostname = false
[[outputs.influxdb_v2]]
  ## The URLs of the InfluxDB cluster nodes.
  ##
  ## Multiple URLs can be specified for a single cluster, only ONE of the
  ## urls will be written to each interval.
  ## urls exp: http://127.0.0.1:8086
  urls = ["xxxxx"]

  ## Token for authentication.
  token = "xxxxxx"

  ## Organization is the name of the organization you wish to write to; must exist.
  organization = "altus"

  ## Destination bucket to write into.
  bucket = "altus"
[[inputs.cpu]]
  ## Whether to report per-cpu stats or not
  percpu = true
  ## Whether to report total system cpu stats or not
  totalcpu = true
  ## If true, collect raw CPU time metrics.
  collect_cpu_time = false
  ## If true, compute and report the sum of all non-idle CPU states.
  report_active = false
[[inputs.disk]]
  ## By default stats will be gathered for all mount points.
  ## Set mount_points will restrict the stats to only the specified mount points.
  # mount_points = ["/"]
  ## Ignore mount points by filesystem type.
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "overlay", "aufs", "squashfs"]
[[inputs.diskio]]
[[inputs.mem]]
[[inputs.net]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]

[[inputs.exec]]
  commands = ["python3 /usr/local/bin/main.py"]
  data_format = "json"
  interval = "120s"
  name_suffix = "-ethermine"
  name_override = "my_metric"
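For context on the data_format = "json" choice: as far as I understand, telegraf's json parser only turns numeric values into fields and silently drops string values (unless they are listed in json_string_fields or used as tags), which is why my script converts numeric strings to floats before printing. A rough Python sketch of that behavior (numeric_fields is just my illustration, not telegraf code):

```python
import json

def numeric_fields(payload: str) -> dict:
    """Keep only numeric values, roughly the way telegraf's `json`
    data_format does (string values are dropped unless configured
    via `json_string_fields`; booleans are not numeric fields here)."""
    return {k: v for k, v in json.loads(payload).items()
            if isinstance(v, (int, float)) and not isinstance(v, bool)}

sample = '{"ask": "43000.0", "bid": 42950.5, "pair": "ETHZAR", "unpaid": 12345}'
print(numeric_fields(sample))  # -> {'bid': 42950.5, 'unpaid': 12345}
```

Note that the string "43000.0" is dropped even though it looks numeric, which is exactly why the float conversion in the script matters.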

Here is my main.py Python script:

import json
import sys

import requests
from luno_python.client import Client


def Merge(dict1, dict2):
    # dict2's values win on duplicate keys
    return {**dict1, **dict2}


c = Client(api_key_id='xxxxx', api_key_secret='xxxxxxxx')
result = {}  # default so the script still emits JSON if the luno call fails
try:
    result = c.get_ticker(pair='ETHZAR')
except Exception as e:
    # log to stderr so stdout stays valid JSON for telegraf
    print(e, file=sys.stderr)


# Miner - Statistics Altus
keyAltus = 'xxxx'
urlAltus = 'https://api.ethermine.org/miner/' + keyAltus + '/currentStats'

# PoolStats
urlPoolStats = 'https://api.ethermine.org' + '/poolStats'


statsAltus = requests.get(urlAltus)
statsPool = requests.get(urlPoolStats)

# luno returns numbers as strings; convert them so telegraf keeps them as fields
for key, value in result.items():
    if isinstance(value, str):
        try:
            result[key] = float(value)
        except ValueError:
            pass

mergedDict = Merge(statsAltus.json()['data'], statsPool.json()['data'])
mergedDict2 = Merge(mergedDict, result)
print(json.dumps(mergedDict2))
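A note on the Merge helper above: with {**dict1, **dict2}, values from the second dict win on duplicate keys, so if the ethermine and luno payloads ever share a key name, the luno value silently overwrites the ethermine one in mergedDict2. A minimal sketch (the keys here are made up for illustration):

```python
def Merge(dict1, dict2):
    # dict unpacking: later values override earlier ones on key collisions
    return {**dict1, **dict2}

a = {"time": 1, "hashrate": 500}   # hypothetical ethermine-style payload
b = {"time": 2, "bid": 42000.0}    # hypothetical luno-style payload
print(Merge(a, b))  # -> {'time': 2, 'hashrate': 500, 'bid': 42000.0}
```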

Ok, so I did some more troubleshooting, and it seems to break when I add the following code:

from luno_python.client import Client

c = Client(api_key_id='luno id goes here', api_key_secret='supersecretkeyhere')
try:
    result = c.get_ticker(pair='ETHZAR')
    # print(res['bid'][:-9])
except Exception as e:
    print(e)

Any ideas? It works without that in my script. I got the library wrapper from here: [GitHub - luno/luno-python: Python SDK for the Luno API](https://github.com/luno/luno-python)
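One theory I have (not verified): when the luno call fails inside that block, print(e) writes a plain error message to stdout, and since telegraf's json parser reads stdout, the whole write would then be rejected as invalid JSON without any obvious error. A defensive sketch of what I mean, using a hypothetical FailingClient in place of the real luno client so it runs stand-alone:

```python
import json
import sys

def safe_ticker(client, pair="ETHZAR"):
    """Return the ticker dict, or {} if the API call fails.
    Errors go to stderr so stdout stays valid JSON for telegraf."""
    try:
        return client.get_ticker(pair=pair)
    except Exception as e:
        print(f"luno error: {e}", file=sys.stderr)
        return {}

class FailingClient:
    """Stand-in for luno_python.client.Client whose call always raises."""
    def get_ticker(self, pair):
        raise RuntimeError("auth failed")

result = safe_ticker(FailingClient())
print(json.dumps(result))  # stdout is still valid JSON: {}
```

With this pattern, a luno outage would just drop the ticker fields instead of silently killing the whole telegraf write.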

Bumping this post; I still haven’t managed to solve it.

@Altus I’m not sure, but maybe @bednar has some insights?
Thank you.