Hello @tonic8,
Were you able to find your answers? If so, could you please share what you did with the community?
If not, I just found this:
$ influx
Connected to http://localhost:8086 version 1.5.1
InfluxDB shell version: 1.5.1
> use _internal
Using database _internal
> SELECT last("diskBytes") FROM "monitor"."shard" WHERE ("database" =~/internal/) AND time >= now() -1h GROUP BY "database", "path" fill(null)
name: shard
tags: database=_internal, path=/var/lib/influxdb/data/_internal/monitor/205
time last
---- ----
1523316450000000000 3739493
name: shard
tags: database=_internal, path=/var/lib/influxdb/data/_internal/monitor/208
time last
---- ----
1523316450000000000 3741307
name: shard
tags: database=_internal, path=/var/lib/influxdb/data/_internal/monitor/212
time last
---- ----
1523316450000000000 3685745
name: shard
tags: database=_internal, path=/var/lib/influxdb/data/_internal/monitor/216
time last
---- ----
1523316450000000000 3799445
name: shard
tags: database=_internal, path=/var/lib/influxdb/data/_internal/monitor/219
time last
---- ----
1523316450000000000 3839261
name: shard
tags: database=_internal, path=/var/lib/influxdb/data/_internal/monitor/222
time last
---- ----
1523316450000000000 3831156
name: shard
tags: database=_internal, path=/var/lib/influxdb/data/_internal/monitor/226
time last
---- ----
1523316450000000000 3922560
name: shard
tags: database=_internal, path=/var/lib/influxdb/data/_internal/monitor/229
time last
---- ----
1523316450000000000 4558010
name: shard
tags: database=_internal, path=/var/lib/influxdb/data/_internal/monitor/232
time last
---- ----
1523316450000000000 3880661
name: shard
tags: database=_internal, path=/var/lib/influxdb/data/_internal/monitor/235
time last
---- ----
1523316450000000000 3854180
name: shard
tags: database=_internal, path=/var/lib/influxdb/data/_internal/monitor/240
time last
---- ----
1523316450000000000 26711342
Which matches pretty closely (note that du -s reports 1 KiB blocks, so multiply by 1024 to compare against the byte values above):
/var/lib/influxdb/data/_internal/monitor # ls
205 208 212 216 219 222 226 229 232 235 240
/var/lib/influxdb/data/_internal/monitor # du -s *
3676 205
3680 208
3624 212
3736 216
3776 219
3768 222
3856 226
4480 229
3816 232
3788 235
9616 240
The reason the last shard looks different is that the shard size InfluxDB reports includes the shard plus its WAL size… it is an InfluxDB-ism. I’m not fluent enough in InfluxQL to join the measurements and subtract the WAL size; maybe someone else can figure that out.
Keep in mind that if a new shard was created, the values in the last hour could appear off. And if a new shard was created and an old one was dropped within the time frame, the query will still show the old shards.
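As a rough sanity check, here is a small sketch using the figures from the output above (assuming du's default 1 KiB blocks): convert the du block count to bytes and subtract it from the reported diskBytes to estimate the WAL contribution for the newest shard.

```python
# Rough sanity check using the figures from the output above.
# du -s reports 1 KiB blocks, while InfluxDB's diskBytes is in bytes;
# the newest shard's reported size also includes its WAL segment.

du_blocks = 9616            # du -s output for shard 240 (1 KiB blocks)
reported_bytes = 26711342   # last("diskBytes") for shard 240

on_disk_bytes = du_blocks * 1024
wal_estimate = reported_bytes - on_disk_bytes

print(f"TSM data on disk: {on_disk_bytes} bytes")
print(f"Approx. WAL contribution: {wal_estimate} bytes")
```

For the older (closed) shards the two numbers line up much more closely, since their WAL segments have already been flushed.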
Inside of Grafana, you can make a graph pretty easily to watch these values.
SELECT "diskBytes" FROM "monitor"."shard" WHERE ("database" = '_internal') AND $timeFilter GROUP BY "path"
You can swap out the database (i.e. “_internal”) for your own database name.
Also, you could monitor the size on disk directly on the host… for example, if the data is stored under a /var/lib/influxdb/ directory, that directory can be monitored.
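If you want to do that host-side monitoring with a script rather than a packaged agent, a minimal sketch might look like this (the /var/lib/influxdb/ path is just the example from above; adjust it to your install):

```python
# Minimal sketch: report the total on-disk size of a directory tree,
# e.g. the InfluxDB data directory. Path below is an example only.
import os

def dir_size_bytes(root: str) -> int:
    """Sum the sizes of all regular files under root."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):  # skip broken symlinks etc.
                total += os.path.getsize(path)
    return total

if __name__ == "__main__":
    print(dir_size_bytes("/var/lib/influxdb/"))
```

Note this sums file contents, like `du --apparent-size`, rather than allocated blocks, so it will differ slightly from plain `du`.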
Also worth checking out:
$ influxd config
and, within the influx shell:
> SHOW STATS
and
> SHOW DIAGNOSTICS
Hope this helps… someone.