Stream TICKscript - Why do I see duplicate values?

With the following TICKscript, why do I see duplicate values with the same timestamp? Is this expected, or is it caused by line protocol?
I used the median function to see whether it handles the duplicates, and it does: I get the correct median for both even and odd numbers of points. Please confirm whether there is anything I need to worry about.

var data = stream
    |from()
        .database(db)
        .retentionPolicy(rp)
        .measurement(measurement)
        .groupBy(groupBy)
        .where(whereFilter)
    |httpOut('line_22')       // Here I see only one value per timestamp
    |window()
        .period(2m)
        .every(2m)
    |where(lambda: isPresent("Percent_Processor_Time"))
    |eval(lambda: "Percent_Processor_Time")
        .as('Percent_Processor_Time_alias')
    |log()
    |httpOut('line_31')      // Here I see each value repeated 4 times across the 2 min window
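For context, this is roughly how I attached the median node when testing whether it handles the duplicates. This is a sketch only: the node placement, the alias `median_value`, and the endpoint name `median_out` are assumptions, not the exact task I ran.

```
// Sketch: median applied to the same windowed, aliased field as above.
// 'median_value' and 'median_out' are illustrative names.
data
    |median('Percent_Processor_Time_alias')
        .as('median_value')
    |httpOut('median_out')
```

Even with the repeated points in each window, the median returned here matched what I expected for the underlying values.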

Example:

No duplicates: debug output from httpOut('line_22')

{
    "series": [
        {
            "name": "Processor",
            "tags": {
                "host": "test_machine",
                "instance": "_Total",
                "objectname": "Processor"
            },
            "columns": [
                "time",
                "Percent_Processor_Time"
            ],
            "values": [
                [
                    "2019-10-18T10:08:00Z",
                    16.348915100097656
                ]
            ]
        }
    ]
}

Each value duplicated 4 times: debug output from httpOut('line_31')

{
    "series": [
        {
            "name": "Processor",
            "tags": {
                "host": "test_machine",
                "instance": "_Total"
            },
            "columns": [
                "time",
                "Percent_Processor_Time_alias",
                "objectname"
            ],
            "values": [
                [
                    "2019-10-18T10:08:00Z",
                    16.348915100097656,
                    "Processor"
                ],
                [
                    "2019-10-18T10:08:00Z",
                    16.348915100097656,
                    "Processor"
                ],
                [
                    "2019-10-18T10:08:00Z",
                    16.348915100097656,
                    "Processor"
                ],
                [
                    "2019-10-18T10:08:00Z",
                    16.348915100097656,
                    "Processor"
                ],
                [
                    "2019-10-18T10:09:00Z",
                    17.49864959716797,
                    "Processor"
                ],
                [
                    "2019-10-18T10:09:00Z",
                    17.49864959716797,
                    "Processor"
                ],
                [
                    "2019-10-18T10:09:00Z",
                    17.49864959716797,
                    "Processor"
                ],
                [
                    "2019-10-18T10:09:00Z",
                    17.49864959716797,
                    "Processor"
                ]
            ]
        }
    ]
}

Here is what the log looks like:
ts=2019-10-18T15:39:59.983+05:30 lvl=info msg="batch point" service=kapacitor task_master=main task=test_median node=log6 prefix= name=Processor group=host=test_machine,instance=_Total tag_host=test_machine tag_instance=_Total tag_objectname=Processor field_Percent_Processor_Time_alias=17.49864959716797 time=2019-10-18T10:09:00Z
ts=2019-10-18T15:39:59.983+05:30 lvl=info msg="batch point" service=kapacitor task_master=main task=test_median node=log6 prefix= name=Processor group=host=test_machine,instance=_Total tag_host=test_machine tag_instance=_Total tag_objectname=Processor field_Percent_Processor_Time_alias=17.49864959716797 time=2019-10-18T10:09:00Z
ts=2019-10-18T15:39:59.983+05:30 lvl=info msg="batch point" service=kapacitor task_master=main task=test_median node=log6 prefix= name=Processor group=host=test_machine,instance=_Total tag_host=test_machine tag_instance=_Total tag_objectname=Processor field_Percent_Processor_Time_alias=17.49864959716797 time=2019-10-18T10:09:00Z
ts=2019-10-18T15:39:59.983+05:30 lvl=info msg="batch point" service=kapacitor task_master=main task=test_median node=log6 prefix= name=Processor group=host=test_machine,instance=_Total tag_host=test_machine tag_instance=_Total tag_objectname=Processor field_Percent_Processor_Time_alias=17.49864959716797 time=2019-10-18T10:09:00Z

Similar issues reported:

My questions are:
Why do we see duplicate values with the same timestamp?
How does Kapacitor handle them?

Thanks