Get data from Thingworx via API in JSON format and push it to InfluxDB

Hi, I’m fairly new to using the Thingworx API and APIs in general.

I want to get data from Thingworx via the API in JSON format, prepare the data in Python, push it to InfluxDB, and finally visualise it in Grafana. My ideas are not clear on how to architect this: where should I preprocess the data, in Python (VS Code) or in InfluxDB? And how do I automate the data fetching? As I understand it, an API call to Thingworx only returns data at the time of the request…


@Lory_96 There are a few ways you could do this. If you’re set on using Python, you can convert your JSON payload into a DataFrame and then use the InfluxDB Python client library to write the data to InfluxDB.
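A rough sketch of that Python route, transforming the JSON into InfluxDB line protocol before writing. The payload shape, measurement name, tag/field names, and bucket below are all assumptions, not the real Thingworx response format, so adapt them to your actual payload:

```python
import json

# Hypothetical Thingworx-style response: Thingworx infotables typically
# return a "rows" array, but verify against your actual payload.
payload = json.loads("""
{
  "rows": [
    {"timestamp": 1700000000000, "sensor": "pump1", "temperature": 21.5},
    {"timestamp": 1700000060000, "sensor": "pump1", "temperature": 21.7}
  ]
}
""")

def to_line_protocol(rows, measurement="thingworx"):
    """Convert rows to InfluxDB line protocol (ms timestamps -> ns)."""
    lines = []
    for row in rows:
        ts_ns = int(row["timestamp"]) * 1_000_000
        lines.append(
            f'{measurement},sensor={row["sensor"]} '
            f'temperature={row["temperature"]} {ts_ns}'
        )
    return lines

lines = to_line_protocol(payload["rows"])

# With the influxdb-client package installed, you would then write these
# lines to a bucket, e.g.:
#   from influxdb_client import InfluxDBClient
#   with InfluxDBClient(url=..., token=..., org=...) as client:
#       client.write_api().write(bucket="my-bucket", record=lines)
print(lines[0])
```

To automate the fetching, a script like this could run on a schedule (cron, a systemd timer, or similar), since each Thingworx API call only returns data at the time of the request.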

If you’re using InfluxDB v2, this would also be possible via Flux tasks. The task would routinely hit the Thingworx API, convert the JSON body into usable data, then write the data to a bucket.
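A minimal sketch of such a task, to give you the shape of it. The URL, app key, and bucket are placeholders, and the mapping from the parsed JSON into writable rows depends entirely on your payload’s structure:

```flux
import "http/requests"
import "experimental/json"

option task = {name: "thingworx-pull", every: 1m}

// Placeholder endpoint and credentials -- replace with your own.
response = requests.get(
    url: "https://my-thingworx-host/Thingworx/Things/MyThing/Properties",
    headers: ["appKey": "MY_APP_KEY", "Accept": "application/json"],
)

// json.parse decodes the response body; from there you would shape the
// result into rows with _time, _measurement, _field, and _value columns
// (e.g. via array.from) and pipe them into to(bucket: "my-bucket").
data = json.parse(data: response.body)
```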

Hi Scott, thanks for your detailed answer. I’ll have a look at Flux tasks. But I just had a question in mind: what is the difference between using InfluxDB to ingest JSON data versus using a plugin like the JSON API plugin for Grafana to ingest data directly into Grafana? Where does Grafana store data after ingestion? Sorry if this went a bit off topic.


The answer to this leads into your second question, but it comes down to what type of data you’re ingesting (time series vs. logs). If you’re ingesting time series data into Grafana, which I assume you are since you’re considering InfluxDB, Grafana is going to store it in its own time series database, Grafana Mimir. Here’s a comparison of InfluxDB and Mimir. Mimir still has to run as a separate process, so you still have to go through the process of setting up a database instance.

If you don’t have a Mimir or InfluxDB datastore, I’m not sure where Grafana stores ingested data. I assume it would hold it in memory, but this isn’t something you can bet on long-term. If it’s data you want to keep, it needs to be persisted somewhere.

Again, this depends on what type of data you’re ingesting. If it’s time series data, it’ll get stored in a Grafana Mimir database. If it’s log data, it’ll get stored in Grafana Loki, which is specifically designed for logs. I don’t know the ins and outs of these offerings, so I can’t speak to specific locations on disk. Hopefully this answers your question.