Selecting data over large time ranges causes heavy swapping and is affecting our POC

We have loaded tick data for 2 years, with a separate measurement per symbol (no tags defined).

Selecting 3 fields causes enormous swapping of about 6 GB, and ultimately our POC host with 24 GB RAM is swapping heavily. We are using InfluxDB 1.4.2.

e.g.:

select "bid-price","ask-size","last-price" from "" where time >= '2016-01-01T00:00:00Z' and time <= now() limit 0

We have no tags, and assuming time is indexed like a tag, this query should not need a full table scan!

I was wondering if it is possible to limit the memory taken by a single query and force results to be pushed out on a per-chunk basis.
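
Something like the 1.x HTTP API's chunked responses is what I had in mind: pass chunked=true (and a chunk_size) to /query and read the response one chunk at a time instead of as one giant JSON body, plus the [coordinator] settings in influxdb.conf (max-concurrent-queries, query-timeout, max-select-point, max-select-series) to cap how much a single query is allowed to scan. A rough sketch of what I mean, assuming InfluxDB on localhost:8086 and a database called "ticks" (placeholder names):

    import json
    import requests

    # Stream the query result in chunks instead of one huge JSON body.
    # chunked=true makes InfluxDB 1.x flush partial results; chunk_size
    # caps the number of points per chunk (default 10000).
    params = {
        "db": "ticks",  # placeholder database name
        "q": 'SELECT "bid-price","ask-size","last-price" FROM "E:AAPL" '
             "WHERE time >= '2016-01-01T00:00:00Z' AND time <= now()",
        "chunked": "true",
        "chunk_size": 10000,
    }

    with requests.get("http://localhost:8086/query", params=params, stream=True) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line:
                continue
            chunk = json.loads(line)  # each chunk arrives as its own JSON object
            for result in chunk.get("results", []):
                for series in result.get("series", []):
                    # process series["values"] incrementally instead of buffering everything
                    print(series["name"], len(series["values"]), "points in this chunk")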

I thought that to avoid a full scan we need to use time ranges, even if it is a huge range!

Can anyone from Engineering please confirm how to allow simultaneous queries from different clients that query the full data range, and enforce real chunking like a "real" DBMS does?
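
To make the question concrete, this is the kind of usage I mean: several clients each pulling the full range at the same time, with each client only ever holding one chunk in memory. The symbols, host and database name below are placeholders, and I am assuming the server-side caps for this (max-concurrent-queries, query-timeout) live under [coordinator] in influxdb.conf; please correct me if that is wrong:

    import json
    import threading
    import requests

    SYMBOLS = ["E:AAPL", "E:MSFT", "E:IBM"]  # placeholder measurements

    def stream_full_range(measurement):
        # Each "client" streams its own chunked response, so its memory use
        # stays roughly bounded by chunk_size rather than by the full range.
        params = {
            "db": "ticks",  # placeholder database name
            "q": f'SELECT * FROM "{measurement}" '
                 "WHERE time >= '2016-01-01T00:00:00Z' AND time <= '2017-12-30T03:00:00Z'",
            "chunked": "true",
            "chunk_size": 5000,
        }
        points = 0
        with requests.get("http://localhost:8086/query", params=params, stream=True) as resp:
            resp.raise_for_status()
            for line in resp.iter_lines():
                if line:
                    chunk = json.loads(line)
                    for result in chunk.get("results", []):
                        for series in result.get("series", []):
                            points += len(series["values"])
        print(measurement, points, "points streamed")

    # Simulate simultaneous clients with one thread per symbol.
    threads = [threading.Thread(target=stream_full_range, args=(s,)) for s in SYMBOLS]
    for t in threads:
        t.start()
    for t in threads:
        t.join()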

This query is needed to draw graphs over the full range rather than one small time range at a time.
Sample query:

select * from "E:AAPL" where time >='2016-01-01T00:00:00Z' and time <= '2017-12-30T03:00:00Z' order by time

Total database size is 26 GB for the POC.

This is seriously affecting our future decision about using InfluxDB!

Any suggestion is really welcome.

Finally, I managed to get the data by querying in 7-day chunks (rough sketch of the loop below).
But unfortunately it shows InfluxDB is not suitable for "backtesting" or for CQG-style graphing!
I am still hopeful that if we put some real tags in the data, we may be able to pull this off; right now there are no tags in this "tick" data.
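
For reference, the 7-day chunking mentioned above is roughly the loop below (host, database name and window size are just what we happened to use in the POC, treat them as placeholders):

    from datetime import datetime, timedelta
    import requests

    # Walk the full two-year range in 7-day windows so no single query has
    # to materialise all the ticks at once.
    START = datetime(2016, 1, 1)
    END = datetime(2017, 12, 30, 3, 0, 0)
    WINDOW = timedelta(days=7)

    def rfc3339(dt):
        return dt.strftime("%Y-%m-%dT%H:%M:%SZ")

    cursor = START
    while cursor < END:
        upper = min(cursor + WINDOW, END)
        q = ('SELECT * FROM "E:AAPL" '
             f"WHERE time >= '{rfc3339(cursor)}' AND time < '{rfc3339(upper)}' "
             "ORDER BY time")
        resp = requests.get("http://localhost:8086/query",
                            params={"db": "ticks", "q": q})  # placeholder db name
        resp.raise_for_status()
        for result in resp.json().get("results", []):
            for series in result.get("series", []):
                # hand each window's points straight to the charting / backtesting code
                print(rfc3339(cursor), "->", rfc3339(upper), len(series["values"]), "points")
        cursor = upper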