So one of the cool new things is these “Scrapers”. However, after configuring one you have no visibility into its status. Is it working? Is there an error? Where can I get details? As far as I can tell, there is nowhere. Errors also aren’t logged by default (running via Docker).
I would suggest showing more information in the web UI about what the scraper is doing and how frequently it runs. If you know what data to look for (or maybe use a dedicated bucket), you can tell that something is going wrong, but not why.
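For anyone in the same situation, a rough sketch of the “know what data to look for” workaround: query the scraper’s target bucket for recent points, and treat an empty result as a sign the scraper has stopped writing. This is an assumption-heavy example, not an official check — the bucket name, URL, token, and org below are all placeholders you’d substitute with your own.

```python
def recent_point_count_query(bucket: str, window: str = "10m") -> str:
    """Build a Flux query that counts points written to `bucket` over the
    last `window`. An empty result suggests the scraper is not writing."""
    return (
        f'from(bucket: "{bucket}")'
        f' |> range(start: -{window})'
        f' |> count()'
    )

if __name__ == "__main__":
    # Hypothetical connection details -- substitute your own.
    from influxdb_client import InfluxDBClient

    client = InfluxDBClient(url="http://localhost:8086",
                            token="my-token", org="my-org")
    tables = client.query_api().query(recent_point_count_query("scraper-bucket"))
    if not tables:
        print("no points in the last 10m -- the scraper is likely failing")
```

Of course, this only tells you *that* the scraper has gone quiet, which is exactly the limitation described above: there is still no way to see *why*.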
EDIT: I found that errors are sometimes logged after all. But again, not surfaced in the web UI.
Additionally, there should be an option to skip certificate verification (the equivalent of `curl -k`) when scraping.
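To make the request concrete: this is what the `curl -k` behavior looks like when fetching a metrics endpoint by hand with Python’s standard library. It’s a sketch of what a “skip certificate checks” toggle would do under the hood, not anything the scraper currently supports; the URL is a placeholder.

```python
import ssl
import urllib.request


def make_ssl_context(insecure: bool = False) -> ssl.SSLContext:
    """Default context verifies the server cert; `insecure=True` disables
    hostname checking and verification, like `curl -k`."""
    ctx = ssl.create_default_context()
    if insecure:
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx


def fetch_metrics(url: str, insecure: bool = False) -> str:
    """Fetch a Prometheus-style /metrics page, optionally ignoring
    self-signed or otherwise invalid certificates."""
    ctx = make_ssl_context(insecure)
    with urllib.request.urlopen(url, context=ctx, timeout=10) as resp:
        return resp.read().decode()


if __name__ == "__main__":
    # Hypothetical target with a self-signed cert -- substitute your own.
    print(fetch_metrics("https://localhost:9100/metrics", insecure=True))
```

A checkbox in the scraper config that flips exactly this kind of setting would cover the common self-signed-certificate case on internal networks.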