Telegraf JSON input format Question

Hello

I have written a Python script that calls the ethermine.org API, and I use this script in my Telegraf config to send the data to InfluxDB. My first API call worked fine; however, I am having what I think is a formatting issue with another call. I have been trying for a few hours but am making little headway. I asked a question on Stack Overflow about the same issue (python - Reformatting a JSON API Response From Ethermine.org - Stack Overflow). I thought I had solved it there, but it doesn't seem to work. Here is my full Python code:

import json
import requests

# Miner - Statistics Altus
keyAltus = '927fC5a85304eB3c181e27971B284c625D3cBbE9'
urlAltus = 'https://api.ethermine.org/miner/' + keyAltus + '/currentStats'

# PoolStats
urlPoolStats = 'https://api.ethermine.org/poolStats'

statsAltus = requests.get(urlAltus)
statsPool = requests.get(urlPoolStats)

# Each print emits its own top-level JSON object.
json_statsAltus = json.dumps(statsAltus.json()['data'])
print(json_statsAltus)

json_statsPool = json.dumps(statsPool.json()['data'])
print(json_statsPool)

Here is the output of the Python script:

{"time": 1614606000, "lastSeen": 1614605949, "reportedHashrate": 36529025, "currentHashrate": 41833333.333333336, "validShares": 37, "invalidShares": 0, "staleShares": 1, "averageHashrate": 35104166.66666665, "activeWorkers": 1, "unpaid": 6540063810106981, "unconfirmed": null, "coinsPerMin": 1.222612368565292e-06, "usdPerMin": 0.0018655964393174647, "btcPerMin": 3.897688230986151e-08}
{"topMiners": [], "minedBlocks": [{"number": 11952655, "miner": "3b1964b93e16e16dc4b29b8a43c456089e60666f", "time": 1614606357}, {"number": 11952653, "miner": "68e9c48084e0bb60f454bb007d9058870ad71e6a", "time": 1614606296}, {"number": 11952650, "miner": "58f1Ea8Ae31878157aE2D54D4A68A3b87ea21Fa7", "time": 1614606259}, {"number": 11952646, "miner": "67e3dD6e50901cA1d60f63c91c142622b4D7015E", "time": 1614606179}, {"number": 11952632, "miner": "8d3966B04B1Cba40598526aC53B46B72281A9f87", "time": 1614605982}, {"number": 11952631, "miner": "80f77e45fb17870A030a8f631de444e2218b555b", "time": 1614605950}, {"number": 11952630, "miner": "93b94364845A31D960002962125880b6DE15f5eA", "time": 1614605943}, {"number": 11952627, "miner": "aa9f1b772b9a5f64629ba1a44899cefdbd6466b1", "time": 1614605938}, {"number": 11952617, "miner": "fdeb381e8c65521674897562916a368fc418a4ff", "time": 1614605839}, {"number": 11952612, "miner": "01139F82659A3bD56D1f051D57D4Bc96a3b9Ef05", "time": 1614605733}], "poolStats": {"hashRate": 83758921222213.8, "miners": 170498, "workers": 518508, "blocksPerHour": 58.17}, "price": {"usd": 1525.91, "btc": 0.03188}}

According to https://jsonlint.com/ both of them are valid JSON, so I don't quite understand why the first print (i.e. the one that starts with {"time"}) works on its own, but when I add the second one I get an error like the following:

2021-03-01T14:07:29Z E! [inputs.exec] Error in plugin: invalid character '{' after top-level value
2021-03-01T14:07:29Z E! [telegraf] Error running agent: input plugins recorded 1 errors

Just thought of something: I commented out print(json_statsAltus) and used only the "problematic" print(json_statsPool), and then it ran through Telegraf fine. So I think this proves neither JSON object is a problem on its own; printing both means the script emits two top-level JSON values back-to-back, which is not one valid JSON document, and that matches the "after top-level value" parser error above. So it is less an issue with my Python output and more that I don't know how to feed multiple results into Telegraf from one exec input.
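One idea (I have not fully tested it) is to emit a single top-level JSON array instead of two separate objects; as far as I understand, Telegraf's JSON data format will parse each element of a top-level array into its own metric. That would mean replacing the two print calls at the end of my script with something like:

# One top-level JSON value (an array) instead of two separate objects.
print(json.dumps([statsAltus.json()['data'], statsPool.json()['data']]))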
Edit: added my Telegraf config.
Do I have to have two separate Python programs and two separate [[inputs.exec]] blocks, or not? I would assume not, otherwise it would get rather cumbersome. (There is a sketch of what two blocks might look like after the config below.)

# Configuration for telegraf agent
[agent]
  ## Default data collection interval for all inputs
  interval = "10s"
  ## Rounds collection interval to 'interval'
  ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
  round_interval = true

  ## Telegraf will send metrics to outputs in batches of at most
  ## metric_batch_size metrics.
  ## This controls the size of writes that Telegraf sends to output plugins.
  metric_batch_size = 1000

  ## For failed writes, telegraf will cache metric_buffer_limit metrics for each
  ## output, and will flush this buffer on a successful write. Oldest metrics
  ## are dropped first when this buffer fills.
  ## This buffer only fills when writes fail to output plugin(s).
  metric_buffer_limit = 10000

  ## Collection jitter is used to jitter the collection by a random amount.
  ## Each plugin will sleep for a random time within jitter before collecting.
  ## This can be used to avoid many plugins querying things like sysfs at the
  ## same time, which can have a measurable effect on the system.
  collection_jitter = "0s"

  ## Default flushing interval for all outputs. Maximum flush_interval will be
  ## flush_interval + flush_jitter
  flush_interval = "10s"
  ## Jitter the flush interval by a random amount. This is primarily to avoid
  ## large write spikes for users running a large number of telegraf instances.
  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
  flush_jitter = "0s"

  ## By default or when set to "0s", precision will be set to the same
  ## timestamp order as the collection interval, with the maximum being 1s.
  ##   ie, when interval = "10s", precision will be "1s"
  ##       when interval = "250ms", precision will be "1ms"
  ## Precision will NOT be used for service inputs. It is up to each individual
  ## service input to set the timestamp at the appropriate precision.
  ## Valid time units are "ns", "us" (or "µs"), "ms", "s".
  precision = ""

  ## Logging configuration:
  ## Run telegraf with debug log messages.
  debug = false
  ## Run telegraf in quiet mode (error log messages only).
  quiet = false
  ## Specify the log file name. The empty string means to log to stderr.
  logfile = ""

  ## Override default hostname, if empty use os.Hostname()
  hostname = ""
  ## If set to true, do not set the "host" tag in the telegraf agent.
  omit_hostname = false
[[outputs.influxdb_v2]]
  ## The URLs of the InfluxDB cluster nodes.
  ##
  ## Multiple URLs can be specified for a single cluster, only ONE of the
  ## urls will be written to each interval.
  ## urls exp: http://127.0.0.1:8086
  urls = ["xxxx"]

  ## Token for authentication.
  token = ""

  ## Organization is the name of the organization you wish to write to; must exist.
  organization = "altus"

  ## Destination bucket to write into.
  bucket = "altus"
[[inputs.cpu]]
  ## Whether to report per-cpu stats or not
  percpu = true
  ## Whether to report total system cpu stats or not
  totalcpu = true
  ## If true, collect raw CPU time metrics.
  collect_cpu_time = false
  ## If true, compute and report the sum of all non-idle CPU states.
  report_active = false
[[inputs.disk]]
  ## By default stats will be gathered for all mount points.
  ## Set mount_points will restrict the stats to only the specified mount points.
  # mount_points = ["/"]
  ## Ignore mount points by filesystem type.
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "overlay", "aufs", "squashfs"]
[[inputs.diskio]]
[[inputs.mem]]
[[inputs.net]]
[[inputs.processes]]
[[inputs.swap]]
[[inputs.system]]

[[inputs.exec]]
  commands = ["python3 /usr/local/bin/main.py"]
  data_format = "json"
  interval = "120s"
  name_suffix = "-ethermine"
  name_override = "my_metric"
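For illustration, this is roughly what I imagine the two-blocks approach would look like, with one script per API call (the statsAltus.py and statsPool.py file names are placeholders, not files I actually have):

[[inputs.exec]]
  commands = ["python3 /usr/local/bin/statsAltus.py"]
  data_format = "json"
  interval = "120s"
  name_override = "miner_stats"

[[inputs.exec]]
  commands = ["python3 /usr/local/bin/statsPool.py"]
  data_format = "json"
  interval = "120s"
  name_override = "pool_stats"

That would presumably work, but it means one process spawned per block, which is why I would rather keep a single script if possible.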

Something that seems to work is:

def merge(dict1, dict2):
    # On key collisions, values from dict2 overwrite those from dict1.
    return {**dict1, **dict2}

mergedDict = merge(statsAltus.json()['data'], statsPool.json()['data'])
print(json.dumps(mergedDict))
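(As an aside, I believe on Python 3.9+ the helper is unnecessary, since dicts support a merge operator: mergedDict = statsAltus.json()['data'] | statsPool.json()['data'].)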

This combines the two dictionaries and outputs them as one JSON object, but I would like to know if there is a better way to solve this: if I keep adding more API calls, it might get a bit cumbersome to keep merging dicts together. And then what would happen if, for instance, I have 2 miners that mine to 2 different wallet addresses? The API call is the same, so the dicts would have the same fields (same key names, but different values? not sure on the terminology), and merging them would make one miner's values overwrite the other's.
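Here is a sketch of the direction I am imagining (untested; the second wallet address and the extra "wallet" field are made up, and it assumes the top-level-array behaviour mentioned above, plus tag_keys = ["wallet"] in the [[inputs.exec]] block so that field becomes a tag that keeps the two miners apart):

import json
import requests

# Hypothetical list of miner wallets (the second entry is a placeholder).
wallets = [
    '927fC5a85304eB3c181e27971B284c625D3cBbE9',
    # 'second-wallet-address-here',
]

payloads = []
for wallet in wallets:
    url = 'https://api.ethermine.org/miner/' + wallet + '/currentStats'
    data = requests.get(url).json()['data']
    # Label each object with its wallet so identical keys from different
    # miners stay distinguishable instead of overwriting each other.
    data['wallet'] = wallet
    payloads.append(data)

payloads.append(requests.get('https://api.ethermine.org/poolStats').json()['data'])

# A single top-level array is one valid JSON document for Telegraf to parse.
print(json.dumps(payloads))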