I have a DataFrame with the following schema:
|-- userId: string
|-- product: string
|-- rating: double
val result = sqlContext.sql("select userId, collect_list(product), collect_list(rating) from data group by userId")
Are the two arrays guaranteed to be in the same order, i.e. does the i-th element of the product array correspond to the i-th element of the rating array?
I believe there is no explicit guarantee that all arrays will have the same order. Spark SQL applies multiple optimizations, and under certain conditions there is no guarantee that all aggregations are computed at the same time (one example is aggregation combined with DISTINCT). Since an exchange (shuffle) produces nondeterministic order, it is theoretically possible that the orders will differ.
So while it should work in practice, it is risky and could introduce hard-to-detect bugs.
If you use Spark 2.0.0 or later, you can aggregate non-atomic columns instead, which keeps each product paired with its rating:
SELECT userId, collect_list(struct(product, rating)) FROM data GROUP BY userId
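Because each array element is a struct, position i always refers to a single input row, so the pairing can never drift apart. If you still want two separate (but consistently ordered) arrays, you can extract the struct fields afterwards — a sketch, where the `pairs` alias is mine:

```sql
-- Extracting a field from an array of structs yields an array of that field,
-- so products and ratings below stay aligned by construction.
WITH agg AS (
    SELECT userId, collect_list(struct(product, rating)) AS pairs
    FROM data
    GROUP BY userId
)
SELECT userId, pairs.product AS products, pairs.rating AS ratings
FROM agg
```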
If you use an earlier version, you can try to force a deterministic layout with explicit partitioning and ordering:
WITH tmp AS (
    SELECT * FROM data DISTRIBUTE BY userId SORT BY userId, product, rating
)
SELECT userId, collect_list(product), collect_list(rating)
FROM tmp
GROUP BY userId
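For reference, the struct-based aggregation can also be expressed with the DataFrame API — a sketch, assuming `data` is the same DataFrame that was registered as the "data" table:

```scala
import org.apache.spark.sql.functions.{collect_list, struct}

// Collect (product, rating) pairs per user; the struct keeps
// each product and its rating in the same array element.
val result = data
  .groupBy("userId")
  .agg(collect_list(struct("product", "rating")).as("pairs"))
```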