Get the data from an HTTP server into InfluxDB 2.6

Hello everyone,

A bit of context: I have set up an InfluxDB server on localhost, running Debian. It's working fine with different sources.

But today I'm trying something new. I'm sending data gathered by my sensors from my Arduino to an HTTP server at a specific IP. When I open a browser and enter the IP, I can visualize the data gathered by my Arduino.

What I would like to do is send that data to my InfluxDB server, but I've tried a few things and I feel I'm not looking in the right place.

Has anyone already done something like this who could help me?

Thanks!

PS: My Arduino is not able to use the InfluxDB library.

Hi @Ritch and welcome to the forum.

Node-RED can run on your Debian machine, and a "flow" can be created to retrieve the data from the Arduino and send it to InfluxDB. Have you considered or tried this yet?

Hi @grant1 thank you !

Not considered yet, I will take a look !

Hi @Ritch,
As well as @grant1's suggestion (which I also recommend), if you need a lightweight option you could also check out the Telegraf HTTP listener plugin: HTTP Listener v2 Monitoring | InfluxData
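For orientation, a minimal sketch of how that plugin pairs with an InfluxDB 2.x output in the Telegraf config. The port, org, and bucket names here are placeholders, and note that the API token goes in the `influxdb_v2` output section, not in the listener input:

```toml
# Minimal sketch (values are placeholders, adjust for your setup)

# Input: accept HTTP writes, e.g. POSTs to http://<host>:8080/telegraf
[[inputs.http_listener_v2]]
  service_address = ":8080"
  paths = ["/telegraf"]
  data_format = "influx"

# Output: forward the received metrics to InfluxDB 2.x.
# This is where the API token belongs.
[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "$INFLUX_TOKEN"
  organization = "your-org"
  bucket = "your-bucket"
```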

Hi @Jay_Clifford ,
I tried to use the Telegraf HTTP listener, but I don't think I quite understood how to make it work.

Example:

Data sent from my Arduino at this address:

I'm sending the data in this format: "Temperature 1: 22.87 Temperature 2: 23.31 Voltage: 1.75 PH: 6.61"
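(For reference, the `influx` data format that the listener consumes is InfluxDB line protocol, so the same readings would need to look something like the output of this sketch; the measurement name `sensors` and the field names are assumptions:)

```python
# Sketch: what the Arduino's plain-text readings would look like as
# InfluxDB line protocol, which is what data_format = "influx" expects.
# Measurement name "sensors" and field names are assumptions.
text = "Temperature 1: 22.87 Temperature 2: 23.31 Voltage: 1.75 PH: 6.61"
parts = text.split()  # values sit at fixed whitespace-separated offsets
line = "sensors temp1={},temp2={},voltage={},ph={}".format(
    parts[2], parts[5], parts[7], parts[9]
)
print(line)  # sensors temp1=22.87,temp2=23.31,voltage=1.75,ph=6.61
```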

And now I want to configure the Telegraf listener, which should be something like this:

# Generic HTTP write listener
[[inputs.http_listener_v2]]

  ## Address and port to host HTTP listener on
  service_address = ""

  ## Paths to listen to.
  # paths = ["/telegraf"]

  ## Save path as http_listener_v2_path tag if set to true
  # path_tag = false

  ## HTTP methods to accept.
  methods = ["POST", "PUT", "GET"]

  ## Maximum duration before timing out read of the request
  # read_timeout = "10s"
  ## Maximum duration before timing out write of the response
  # write_timeout = "10s"

  ## Maximum allowed HTTP request body size in bytes.
  ## 0 means to use the default of 524,288,000 bytes (500 mebibytes)
  # max_body_size = "500MB"

  ## Part of the request to consume.
  # data_source = "body"

  ## Set one or more allowed client CA certificate file names to
  ## enable mutually authenticated TLS connections
  # tls_allowed_cacerts = ["/etc/telegraf/clientca.pem"]

  ## Add service certificate and key
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"

  ## Optional username and password to accept for HTTP basic authentication.
  ## You probably want to make sure you have TLS configured above for this.
  # basic_username = "foobar"
  # basic_password = "barfoo"

  ## Optional setting to map HTTP headers into tags
  ## If the HTTP header is not present on the request, no corresponding tag will be added
  ## If multiple instances of the HTTP header are present, only the first value will be used
  # http_header_tags = {"HTTP_HEADER" = "TAG_NAME"}

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  data_format = "influx"

But then when I click on listen data, an error shows up.

I suspect that I need to put the API token somewhere, but I don't know where in this case.

Thank you!

I found my solution and I wanted to share it.

I used a Python script to make everything work.


import requests
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Fetch the plain-text page served by the Arduino
response = requests.get("")

# "Temperature 1: 22.87 ..." -> the first value sits at index 2
data = response.text.split()
temp1 = data[2]

# You can generate a token from the "token tab" in the UI
token = "your token"
org = "your org"
bucket = "testbucket"

client = InfluxDBClient(url="http://localhost:8086", token=token)
write_api = client.write_api(write_options=SYNCHRONOUS)

p = Point("Measure").tag("location", "moon").field("temp1", float(temp1))
write_api.write(bucket=bucket, org=org, record=p)
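The script above writes only the first temperature. A sketch of how the parsing part could be extended to pull all four readings out of the Arduino's response (the field names here are assumptions, and the offsets follow the format quoted earlier):

```python
# Sketch: parse all four values from the Arduino's plain-text response.
# Offsets follow the quoted format:
# "Temperature 1: 22.87 Temperature 2: 23.31 Voltage: 1.75 PH: 6.61"
def parse_readings(text):
    parts = text.split()
    return {
        "temp1": float(parts[2]),
        "temp2": float(parts[5]),
        "voltage": float(parts[7]),
        "ph": float(parts[9]),
    }

fields = parse_readings(
    "Temperature 1: 22.87 Temperature 2: 23.31 Voltage: 1.75 PH: 6.61"
)
print(fields)
# {'temp1': 22.87, 'temp2': 23.31, 'voltage': 1.75, 'ph': 6.61}
```

The resulting dict can then be looped into `.field()` calls on the point before writing, e.g. `for name, value in fields.items(): p = p.field(name, value)`.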

Sorry for the late reply here @Ritch! Glad you found a solution.
