How do I count word frequencies in a List with Java 8?
List<String> wordsList = Lists.newArrayList("hello", "bye", "ciao", "bye", "ciao");
The result must be:
{ciao=2, hello=1, bye=2}
Originally posted by Mouna, translated under the CC BY-SA 4.0 license.
(Note: see the edit below)
As an alternative to Mouna's answer, here is a way to count the words in parallel:
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ParallelWordCount
{
    public static void main(String[] args)
    {
        List<String> list = Arrays.asList(
            "hello", "bye", "ciao", "bye", "ciao");

        // Map each word to 1 and merge equal keys with Integer::sum,
        // collecting into a ConcurrentMap so the parallel stream can combine safely
        Map<String, Integer> counts = list.parallelStream().collect(
            Collectors.toConcurrentMap(
                w -> w, w -> 1, Integer::sum));

        System.out.println(counts);
    }
}
Edit: In response to the comments, I ran a small test with JMH, comparing the toConcurrentMap and the groupingByConcurrent approach with different input-list sizes and random words of different lengths. This test suggested that the toConcurrentMap approach is faster. Considering how different these approaches are "under the hood", it is hard to predict something like this.

As a further extension, based on further comments, I extended the test to cover all four combinations of toMap, groupingBy, serial and parallel. The results still are that the toMap approach is faster, but unexpectedly (at least for me) the "concurrent" versions in both cases are slower than the serial versions...:
(method) (count) (wordLength) Mode Cnt Score Error Units
toConcurrentMap 1000 2 avgt 50 146,636 ± 0,880 us/op
toConcurrentMap 1000 5 avgt 50 272,762 ± 1,232 us/op
toConcurrentMap 1000 10 avgt 50 271,121 ± 1,125 us/op
toMap 1000 2 avgt 50 44,396 ± 0,541 us/op
toMap 1000 5 avgt 50 46,938 ± 0,872 us/op
toMap 1000 10 avgt 50 46,180 ± 0,557 us/op
groupingBy 1000 2 avgt 50 46,797 ± 1,181 us/op
groupingBy 1000 5 avgt 50 68,992 ± 1,537 us/op
groupingBy 1000 10 avgt 50 68,636 ± 1,349 us/op
groupingByConcurrent 1000 2 avgt 50 231,458 ± 0,658 us/op
groupingByConcurrent 1000 5 avgt 50 438,975 ± 1,591 us/op
groupingByConcurrent 1000 10 avgt 50 437,765 ± 1,139 us/op
toConcurrentMap 10000 2 avgt 50 712,113 ± 6,340 us/op
toConcurrentMap 10000 5 avgt 50 1809,356 ± 9,344 us/op
toConcurrentMap 10000 10 avgt 50 1813,814 ± 16,190 us/op
toMap 10000 2 avgt 50 341,004 ± 16,074 us/op
toMap 10000 5 avgt 50 535,122 ± 24,674 us/op
toMap 10000 10 avgt 50 511,186 ± 3,444 us/op
groupingBy 10000 2 avgt 50 340,984 ± 6,235 us/op
groupingBy 10000 5 avgt 50 708,553 ± 6,369 us/op
groupingBy 10000 10 avgt 50 712,858 ± 10,248 us/op
groupingByConcurrent 10000 2 avgt 50 901,842 ± 8,685 us/op
groupingByConcurrent 10000 5 avgt 50 3762,478 ± 21,408 us/op
groupingByConcurrent 10000 10 avgt 50 3795,530 ± 32,096 us/op
I'm not very experienced with JMH; maybe I did something wrong here, and suggestions and corrections are welcome:
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import java.util.stream.Collectors;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
public class ParallelWordCount
{
    @Param({"toConcurrentMap", "toMap", "groupingBy", "groupingByConcurrent"})
    public String method;

    @Param({"2", "5", "10"})
    public int wordLength;

    @Param({"1000", "10000"})
    public int count;

    private List<String> list;

    @Setup
    public void initList()
    {
        list = createRandomStrings(count, wordLength, new Random(0));
    }

    @Benchmark
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    public void testMethod(Blackhole bh)
    {
        if (method.equals("toMap"))
        {
            Map<String, Integer> counts =
                list.stream().collect(
                    Collectors.toMap(
                        w -> w, w -> 1, Integer::sum));
            bh.consume(counts);
        }
        else if (method.equals("toConcurrentMap"))
        {
            Map<String, Integer> counts =
                list.parallelStream().collect(
                    Collectors.toConcurrentMap(
                        w -> w, w -> 1, Integer::sum));
            bh.consume(counts);
        }
        else if (method.equals("groupingBy"))
        {
            Map<String, Long> counts =
                list.stream().collect(
                    Collectors.groupingBy(
                        Function.identity(), Collectors.<String>counting()));
            bh.consume(counts);
        }
        else if (method.equals("groupingByConcurrent"))
        {
            Map<String, Long> counts =
                list.parallelStream().collect(
                    Collectors.groupingByConcurrent(
                        Function.identity(), Collectors.<String>counting()));
            bh.consume(counts);
        }
    }

    private static String createRandomString(int length, Random random)
    {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < length; i++)
        {
            int c = random.nextInt(26);
            sb.append((char) (c + 'a'));
        }
        return sb.toString();
    }

    private static List<String> createRandomStrings(
        int count, int length, Random random)
    {
        List<String> list = new ArrayList<String>(count);
        for (int i = 0; i < count; i++)
        {
            list.add(createRandomString(length, random));
        }
        return list;
    }
}
Only for the serial case with a list of 10000 elements and 2-letter words are the times similar.
It might be worth checking whether the concurrent versions eventually outperform the serial ones for even larger list sizes, but I currently don't have the time to run another detailed benchmark with all these configurations.
Originally posted by Marco13, translated under the CC BY-SA 3.0 license.
I'd like to share the solution I found, because at first I expected to use a map-and-reduce approach, but it turned out to be a bit different.
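A minimal sketch of such a solution, assuming the wordsList from the question and counting with Collectors.groupingBy and Collectors.counting (the class and variable names here are my own, not necessarily those of the original answer):

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Sketch only: names are illustrative, not taken from the original answer
public class WordFrequency
{
    public static void main(String[] args)
    {
        List<String> wordsList = Arrays.asList("hello", "bye", "ciao", "bye", "ciao");

        // Group equal words and count their occurrences; the values are Long
        Map<String, Long> frequencies = wordsList.stream().collect(
            Collectors.groupingBy(Function.identity(), Collectors.counting()));

        // Prints something like {ciao=2, hello=1, bye=2} (HashMap order is not guaranteed)
        System.out.println(frequencies);
    }
}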
Or, for integer values:
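A possible sketch, assuming the same wordsList as above and using Collectors.summingInt so that the map values are Integer instead of Long:

// Sketch only: assumes the wordsList from the previous example
Map<String, Integer> frequencies = wordsList.stream().collect(
    Collectors.groupingBy(Function.identity(), Collectors.summingInt(w -> 1)));

// Prints something like {ciao=2, hello=1, bye=2}
System.out.println(frequencies);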
Edit
I added how to sort the map by value:
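A possible sketch of sorting the counts by value, assuming the groupingBy-based count from the first example and collecting the sorted entries into a LinkedHashMap to preserve the ordering (again, the class and variable names are my own):

import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Sketch only: names are illustrative, not taken from the original answer
public class SortedWordFrequency
{
    public static void main(String[] args)
    {
        List<String> wordsList = Arrays.asList("hello", "bye", "ciao", "bye", "ciao");

        Map<String, Long> frequencies = wordsList.stream().collect(
            Collectors.groupingBy(Function.identity(), Collectors.counting()));

        // Sort the entries by descending count and keep that order in a LinkedHashMap
        Map<String, Long> sorted = frequencies.entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
            .collect(Collectors.toMap(
                Map.Entry::getKey, Map.Entry::getValue,
                (a, b) -> a, LinkedHashMap::new));

        // Prints something like {ciao=2, bye=2, hello=1}
        System.out.println(sorted);
    }
}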