/*
 * Copyright (c) 2012, 2013, Oracle and/or its affiliates. All rights reserved.
 * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
 *
 * This code is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 only, as
 * published by the Free Software Foundation.  Oracle designates this
 * particular file as subject to the "Classpath" exception as provided
 * by Oracle in the LICENSE file that accompanied this code.
 *
 * This code is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * version 2 for more details (a copy is included in the LICENSE file that
 * accompanied this code).
 *
 * You should have received a copy of the GNU General Public License version
 * 2 along with this work; if not, write to the Free Software Foundation,
 * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
 *
 * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
 * or visit www.oracle.com if you need additional information or have any
 * questions.
 */
package java.util.stream;

import java.util.Collections;
import java.util.EnumSet;
import java.util.Objects;
import java.util.Set;
import java.util.function.BiConsumer;
import java.util.function.BinaryOperator;
import java.util.function.Function;
import java.util.function.Supplier;

/**
 * A mutable reduction operation that
 * accumulates input elements into a mutable result container, optionally
 * transforming the accumulated result into a final representation after all
 * input elements have been processed.  Reduction operations can be performed
 * either sequentially or in parallel.
 *
 * <p>Examples of mutable reduction operations include:
 * accumulating elements into a {@code Collection}; concatenating
 * strings using a {@code StringBuilder}; computing summary information about
 * elements such as sum, min, max, or average; computing "pivot table" summaries
 * such as "maximum valued transaction by seller", etc.  The class {@link Collectors}
 * provides implementations of many common mutable reductions.
 *
 * <p>A {@code Collector} is specified by four functions that work together to
 * accumulate entries into a mutable result container, and optionally perform
 * a final transform on the result.  They are:
 * <ul>
 *     <li>creation of a new result container ({@link #supplier()})</li>
 *     <li>incorporating a new data element into a result container ({@link #accumulator()})</li>
 *     <li>combining two result containers into one ({@link #combiner()})</li>
 *     <li>performing an optional final transform on the container ({@link #finisher()})</li>
 * </ul>
 *
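 * <p>As a non-normative sketch (the name {@code joinSketch} and the
 * {@code StringJoiner}-based result container are illustrative only), the four
 * functions, together with the collector's characteristics, could be supplied
 * explicitly:
 * <pre>{@code
 *     Collector<String, StringJoiner, String> joinSketch =
 *         new Collector<String, StringJoiner, String>() {
 *             public Supplier<StringJoiner> supplier() { return () -> new StringJoiner(", "); }
 *             public BiConsumer<StringJoiner, String> accumulator() { return StringJoiner::add; }
 *             public BinaryOperator<StringJoiner> combiner() { return StringJoiner::merge; }
 *             public Function<StringJoiner, String> finisher() { return StringJoiner::toString; }
 *             public Set<Characteristics> characteristics() { return Collections.emptySet(); }
 *         };
 * }</pre>
 * In practice, the predefined {@link Collectors#joining(CharSequence)} provides
 * this behavior.
 *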
 * <p>Collectors also have a set of characteristics, such as
 * {@link Characteristics#CONCURRENT}, that provide hints that can be used by a
 * reduction implementation to provide better performance.
 *
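 * <p>As a non-normative illustration, a reduction implementation might consult
 * these hints by inspecting a collector's characteristics set (the outcome
 * depends on the particular collector):
 * <pre>{@code
 *     Collector<String, ?, Set<String>> c = Collectors.toSet();
 *     boolean mayAccumulateOutOfOrder =
 *         c.characteristics().contains(Collector.Characteristics.UNORDERED);
 * }</pre>
 *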
 * <p>A sequential implementation of a reduction using a collector would
 * create a single result container using the supplier function, and invoke the
 * accumulator function once for each input element.  A parallel implementation
 * would partition the input, create a result container for each partition,
 * accumulate the contents of each partition into a subresult for that partition,
 * and then use the combiner function to merge the subresults into a combined
 * result.
 *
 * <p>To ensure that sequential and parallel executions produce equivalent
 * results, the collector functions must satisfy an identity and an
 * associativity constraint.
 *
 * <p>The identity constraint says that for any partially accumulated result,
 * combining it with an empty result container must produce an equivalent
 * result.  That is, for a partially accumulated result {@code a} that is the
 * result of any series of accumulator and combiner invocations, {@code a} must
 * be equivalent to {@code combiner.apply(a, supplier.get())}.
 *
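 * <p>A minimal, non-normative sketch of this property for a list-accumulating
 * collector's functions (the local names are illustrative only):
 * <pre>{@code
 *     Supplier<List<String>> supplier = ArrayList::new;
 *     BinaryOperator<List<String>> combiner =
 *         (left, right) -> { left.addAll(right); return left; };
 *
 *     List<String> a = supplier.get();
 *     a.add("partial");                 // a partially accumulated result
 *     // identity: merging in an empty container must leave the result equivalent
 *     assert combiner.apply(a, supplier.get()).equals(a);
 * }</pre>
 *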
 * <p>The associativity constraint says that splitting the computation must
 * produce an equivalent result.  That is, for any input elements {@code t1}
 * and {@code t2}, the results {@code r1} and {@code r2} in the computation
 * below must be equivalent:
 * <pre>{@code
 *     A a1 = supplier.get();
 *     accumulator.accept(a1, t1);
 *     accumulator.accept(a1, t2);
 *     R r1 = finisher.apply(a1);  // result without splitting
 *
 *     A a2 = supplier.get();
 *     accumulator.accept(a2, t1);
 *     A a3 = supplier.get();
 *     accumulator.accept(a3, t2);
 *     R r2 = finisher.apply(combiner.apply(a2, a3));  // result with splitting
 * }</pre>
 *
 * <p>For collectors that do not have the {@code UNORDERED} characteristic,
 * two accumulated results {@code a1} and {@code a2} are equivalent if
 * {@code finisher.apply(a1).equals(finisher.apply(a2))}.  For unordered
 * collectors, equivalence is relaxed to allow for non-equality related to
 * differences in order.  (For example, an unordered collector that accumulated
 * elements to a {@code List} would consider two lists equivalent if they
 * contained the same elements, ignoring order.)
 *
 * <p>Libraries that implement reduction based on {@code Collector}, such as
 * {@link Stream#collect(Collector)}, must adhere to the following constraints:
 * <ul>
 *     <li>The first argument passed to the accumulator function, both
 *     arguments passed to the combiner function, and the argument passed to
 *     the finisher function must be the result of a previous invocation of the
 *     result supplier, accumulator, or combiner functions.</li>
 *     <li>The implementation should not do anything with the result of any of
 *     the result supplier, accumulator, or combiner functions other than to
 *     pass them again to the accumulator, combiner, or finisher functions,
 *     or return them to the caller of the reduction operation.</li>
 *     <li>If a result is passed to the combiner or finisher function, and the
 *     same object is not returned from that function, it is never used
 *     again.</li>
 *     <li>Once a result is passed to the combiner or finisher function, it
 *     is never passed to the accumulator function again.</li>
 *     <li>For non-concurrent collectors, any result returned from the result
 *     supplier, accumulator, or combiner functions must be serially
 *     thread-confined.  This enables collection to occur in parallel without
 *     the {@code Collector} needing to implement any additional synchronization.
 *     The reduction implementation must ensure that the input is properly
 *     partitioned, that partitions are processed in isolation, and that
 *     combining happens only after accumulation is complete.</li>
 *     <li>For concurrent collectors, an implementation is free to (but not
 *     required to) implement reduction concurrently.  A concurrent reduction
 *     is one where the accumulator function is called concurrently from
 *     multiple threads, using the same concurrently-modifiable result
 *     container, rather than keeping the result isolated during accumulation.
 *     A concurrent reduction should only be applied if the collector has the
 *     {@link Characteristics#UNORDERED} characteristic or if the originating
 *     data is unordered.</li>
 * </ul>
 *
 * <p>In addition to the predefined implementations in {@link Collectors}, the
 * static factory methods {@link #of(Supplier, BiConsumer, BinaryOperator, Characteristics...)}
 * and {@link #of(Supplier, BiConsumer, BinaryOperator, Function, Characteristics...)}
 * can be used to construct collectors.  For example, you could create a collector
 * that accumulates widgets into a {@code TreeSet} with:
 *
 * <pre>{@code
 *     Collector<Widget, ?, TreeSet<Widget>> intoSet =
 *         Collector.of(TreeSet::new, TreeSet::add,
 *                      (left, right) -> { left.addAll(right); return left; });
 * }</pre>
 *
 * (This behavior is also implemented by the predefined collector
 * {@link Collectors#toCollection(Supplier)}.)
 *
 * @apiNote
 * Performing a reduction operation with a {@code Collector} should produce a
 * result equivalent to:
 * <pre>{@code
 *     R container = collector.supplier().get();
 *     for (T t : data)
 *         collector.accumulator().accept(container, t);
 *     return collector.finisher().apply(container);
 * }</pre>
 *
 * <p>However, the library is free to partition the input, perform the reduction
 * on the partitions, and then use the combiner function to combine the partial
 * results to achieve a parallel reduction.  (Whether this performs better or
 * worse than a sequential reduction depends on the relative cost of the
 * accumulator and combiner functions.)
 *
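 * <p>As a non-normative sketch, assuming the input has already been split into
 * two hypothetical partitions {@code left} and {@code right} (a real library
 * would typically split recursively via a {@code Spliterator} and process
 * partitions in separate threads), such a partitioned reduction could be
 * driven as:
 * <pre>{@code
 *     static <T, A, R> R splitAndCollect(List<T> left, List<T> right,
 *                                        Collector<T, A, R> collector) {
 *         A a1 = collector.supplier().get();      // container for the first partition
 *         left.forEach(t -> collector.accumulator().accept(a1, t));
 *         A a2 = collector.supplier().get();      // container for the second partition
 *         right.forEach(t -> collector.accumulator().accept(a2, t));
 *         // merge the partial results, then apply the final transform
 *         return collector.finisher().apply(collector.combiner().apply(a1, a2));
 *     }
 * }</pre>
 *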
 * <p>Collectors are designed to be <em>composed</em>; many of the methods
 * in {@link Collectors} are functions that take a collector and produce
 * a new collector.  For example, given the following collector that computes
 * the sum of the salaries of a stream of employees:
 *
 * <pre>{@code
 *     Collector<Employee, ?, Integer> summingSalaries
 *         = Collectors.summingInt(Employee::getSalary);
 * }</pre>
 *
 * If we wanted to create a collector to tabulate the sum of salaries by
 * department, we could reuse the "sum of salaries" logic using
 * {@link Collectors#groupingBy(Function, Collector)}:
 *
 * <pre>{@code
 *     Collector<Employee, ?, Map<Department, Integer>> summingSalariesByDept
 *         = Collectors.groupingBy(Employee::getDepartment, summingSalaries);
 * }</pre>
 *
 * @see Stream#collect(Collector)
 * @see Collectors
 *
 * @param <T> the type of input elements to the reduction operation
 * @param <A> the mutable accumulation type of the reduction operation (often
 *            hidden as an implementation detail)
 * @param <R> the result type of the reduction operation
 * @since 1.8
 */
public interface Collector<T, A, R> {
/**
 * A function that creates and returns a new mutable result container.
 *
 * @return a function which returns a new, mutable result container
 */
Supplier<A> supplier();

/**
 * A function that folds a value into a mutable result container.
 *
 * @return a function which folds a value into a mutable result container
 */
BiConsumer<A, T> accumulator();

/**
 * A function that accepts two partial results and merges them.  The
 * combiner function may fold state from one argument into the other and
 * return that, or may return a new result container.
 *
 * @return a function which combines two partial results into a combined
 * result
 */
BinaryOperator<A> combiner();

/**
 * Perform the final transformation from the intermediate accumulation type
 * {@code A} to the final result type {@code R}.
 *
 * <p>If the characteristic {@code IDENTITY_FINISH} is
* set, this function may be presumed to be an identity transform with an
* unchecked cast from {@code A} to {@code R}.
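 *
 * <p>For illustration only, a collector whose finisher performs a real final
 * transformation (from a {@code StringBuilder} accumulation to a {@code String})
 * could be assembled with the {@code Collector.of} overload that accepts a
 * finisher; the name {@code concatenating} is illustrative:
 * <pre>{@code
 *     Collector<String, StringBuilder, String> concatenating =
 *         Collector.of(StringBuilder::new,        // supplier
 *                      StringBuilder::append,     // accumulator
 *                      StringBuilder::append,     // combiner
 *                      StringBuilder::toString);  // finisher: StringBuilder -> String
 * }</pre>
 * Such a collector must not report {@code IDENTITY_FINISH}, because an unchecked
 * cast from {@code StringBuilder} to {@code String} would fail.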
*
* @return a function which transforms the intermediate result to the final
* result
*/
Function<A, R> finisher();
/**
* Returns a {@code Set} of {@code Collector.Characteristics} indicating
* the characteristics of this Collector. This set should be immutable.
*
* @return an immutable set of collector characteristics
*/
Set<Characteristics> characteristics();

/**
 * Characteristics indicating properties of a {@code Collector}, which can
 * be used to optimize reduction implementations.
 */
enum Characteristics {
/**
 * Indicates that this collector is <em>concurrent</em>, meaning that
 * the result container can support the accumulator function being
 * called concurrently with the same result container from multiple
 * threads.
 *
 * <p>If a {@code CONCURRENT} collector is not also {@code UNORDERED},
* then it should only be evaluated concurrently if applied to an
* unordered data source.
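 *
 * <p>As a non-normative example, {@link Collectors#groupingByConcurrent(Function, Collector)}
 * returns a collector with this characteristic; its result container is a
 * concurrent map that tolerates simultaneous calls to the accumulator:
 * <pre>{@code
 *     Collector<String, ?, ConcurrentMap<String, Long>> countsByWord =
 *         Collectors.groupingByConcurrent(Function.identity(), Collectors.counting());
 * }</pre>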
*/
CONCURRENT,
/**
* Indicates that the collection operation does not commit to preserving
* the encounter order of input elements. (This might be true if the
* result container has no intrinsic order, such as a {@link Set}.)
*/
UNORDERED,
/**
* Indicates that the finisher function is the identity function and
* can be elided. If set, it must be the case that an unchecked cast
* from A to R will succeed.
*/
IDENTITY_FINISH
}
}