I have a set of data similar to this:
# Start_Time End_Time Call_Type Info
1 13:14:37.236 13:14:53.700 Ping1 RTT(Avr):160ms
2 13:14:58.955 13:15:29.984 Ping2 RTT(Avr):40ms
3 13:19:12.754 13:19:14.757 Ping3_1 RTT(Avr):620ms
3 13:19:12.754 Ping3_2 RTT(Avr):210ms
4 13:14:58.955 13:15:29.984 Ping4 RTT(Avr):360ms
5 13:19:12.754 13:19:14.757 Ping1 RTT(Avr):40ms
6 13:19:59.862 13:20:01.522 Ping2 RTT(Avr):163ms
...
When I parse the file, I need to merge the results of Ping3_1 and Ping3_2, average those two rows, and export them as a single row, so the end result would look like this:
# Start_Time End_Time Call_Type Info
1 13:14:37.236 13:14:53.700 Ping1 RTT(Avr):160ms
2 13:14:58.955 13:15:29.984 Ping2 RTT(Avr):40ms
3 13:19:12.754 13:19:14.757 Ping3 RTT(Avr):415ms
4 13:14:58.955 13:15:29.984 Ping4 RTT(Avr):360ms
5 13:19:12.754 13:19:14.757 Ping1 RTT(Avr):40ms
6 13:19:59.862 13:20:01.522 Ping2 RTT(Avr):163ms
...
Currently, I am concatenating columns 0 and 1 to make a unique key, finding the duplicates there, and then doing the rest of the special treatment for those parallel pings. It is not elegant at all. I'm just wondering whether there is a better way to do it. Thanks!
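For context, here is roughly what I am doing now, as a simplified Python sketch (the field parsing and output formatting are simplified, and the function and variable names are just placeholders):

def parse_rtt(info):
    # Pull the numeric RTT out of a field like "RTT(Avr):160ms"
    return int(info.split(":")[1].rstrip("ms"))

def merge_parallel_pings(rows):
    # rows: lists like [id, start_time, end_time, call_type, info];
    # a parallel ping row (e.g. Ping3_2) may be missing the end_time column
    merged = {}  # insertion-ordered in Python 3.7+
    for row in rows:
        key = row[0] + "|" + row[1]  # concatenate columns 0 and 1 as the key
        if key in merged:
            # Duplicate key means a parallel ping: average the two RTTs
            # and strip the _1/_2 suffix from the call type
            prev = merged[key]
            avg = (parse_rtt(prev[-1]) + parse_rtt(row[-1])) // 2
            prev[3] = prev[3].rsplit("_", 1)[0]
            prev[-1] = "RTT(Avr):%dms" % avg
        else:
            merged[key] = list(row)
    return list(merged.values())

It works (Ping3_1 and Ping3_2 collapse into one Ping3 row with RTT(Avr):415ms), but the key concatenation and the in-place patching feel clumsy.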