Scala Functional Programming Examples

1. Traversal: foreach

def foreach(f: (A) => Unit): Unit
Applies f to every element for its side effect; the result type Unit means nothing is returned.


scala> val a = List(1,2,3,4)
val a: List[Int] = List(1, 2, 3, 4)

scala> a.foreach((x:Int) => {println(x)})
1
2
3
4

scala> a.foreach((x:Int) => println(x)) // braces are optional when the body is a single expression
1
2
3
4

scala> a.foreach(x => println(x))   // the parameter type is inferred, so the annotation can be dropped
1
2
3
4

scala> a.foreach(println(_))    // underscore shorthand: usable when the parameter appears exactly once in the body and not inside a nested call
1
2
3
4
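
Since foreach returns Unit, it is only useful for its side effects. A minimal sketch with hypothetical names, assuming a Scala 2.13+ REPL, that accumulates squares into a mutable buffer:

import scala.collection.mutable.ListBuffer

val buf = ListBuffer[Int]()                 // hypothetical buffer to collect results
List(1, 2, 3, 4).foreach(x => buf += x * x) // side effect: append each square
// buf: ListBuffer(1, 4, 9, 16)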

2. Mapping: map

def map[B](f: (A) => B): List[B]
Applies f to every element and returns a new collection whose elements have the target type B.

scala> val a = List(1,2,3,4)
val a: List[Int] = List(1, 2, 3, 4)

scala> a.map(x=>x+1)
val res1: List[Int] = List(2, 3, 4, 5)

scala> a.map(_+1)   // underscore shorthand: usable when the parameter appears exactly once in the body and not inside a nested call
val res2: List[Int] = List(2, 3, 4, 5)

scala> a.map[String](x => s"${x}x")
val res5: List[String] = List(1x, 2x, 3x, 4x)


scala> a.map(x => s"${x}a")
val res7: List[String] = List(1a, 2a, 3a, 4a)
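
map can change the shape of each element, not just its value. A minimal sketch with hypothetical names that pairs each number with its square:

val pairs = List(1, 2, 3, 4).map(x => (x, x * x))   // Int => (Int, Int)
// pairs: List((1,1), (2,4), (3,9), (4,16))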

3. Flat mapping: flatMap

def flatMap[B](f: (A) => IterableOnce[B]): List[B]
f maps each element to a collection; the per-element collections are then concatenated into a single new collection of B.

Equivalent to map followed by flatten:

scala> val a = List("hadoop hive spark flink flume", "kudu hbase sqoop storm")
val a: List[String] = List(hadoop hive spark flink flume, kudu hbase sqoop storm)

scala> a.map(_.split(" "))
val res8: List[Array[String]] = List(Array(hadoop, hive, spark, flink, flume), Array(kudu, hbase, sqoop, storm))

scala> res8.flatten
val res9: List[String] = List(hadoop, hive, spark, flink, flume, kudu, hbase, sqoop, storm)

scala> a.flatMap(_.split(" "))
val res10: List[String] = List(hadoop, hive, spark, flink, flume, kudu, hbase, sqoop, storm)
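
flatMap also flattens Option-like results, which is a common way to drop elements that fail a conversion. A minimal sketch, assuming Scala 2.13+ (for toIntOption) and hypothetical names:

val raw  = List("1", "2", "x", "4")
val nums = raw.flatMap(_.toIntOption)   // "x" parses to None and is flattened away
// nums: List(1, 2, 4)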

4. Filtering: filter

def filter(p: (A) => Boolean): List[A]
Returns a new collection containing only the elements for which the predicate p returns true.

scala> val a = List(1,2,3,4,5,6,7,8,9)
val a: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9)

scala> a.filter(x => x % 2 == 0)
val res12: List[Int] = List(2, 4, 6, 8)

scala> a.filter(_ % 2 == 0)
val res11: List[Int] = List(2, 4, 6, 8)
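
filter composes naturally with the operations above, and filterNot keeps the elements for which the predicate is false. A minimal sketch with hypothetical names:

val evenSquares = List(1, 2, 3, 4, 5, 6).filter(_ % 2 == 0).map(x => x * x)
// evenSquares: List(4, 16, 36)

val odds = List(1, 2, 3, 4, 5, 6).filterNot(_ % 2 == 0)   // complement of filter
// odds: List(1, 3, 5)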


5. Sorting

sorted: default sort, using the implicit Ordering of the element type

scala> var a = List(3,1,2,9,7)
var a: List[Int] = List(3, 1, 2, 9, 7)

scala> a.sorted
val res13: List[Int] = List(1, 2, 3, 7, 9)
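
Because sorted takes an implicit Ordering, a descending sort can pass a reversed ordering explicitly. A minimal sketch with hypothetical names:

val desc = List(3, 1, 2, 9, 7).sorted(Ordering[Int].reverse)   // explicit reversed Ordering
// desc: List(9, 7, 3, 2, 1)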

sortBy: sort by a derived key    def sortBy[B](f: (A) => B)(implicit ord: Ordering[B]): List[A]


scala> var a = List("01 hadoop", "02 flume", "03 hive", "04 spark")
var a: List[String] = List(01 hadoop, 02 flume, 03 hive, 04 spark)

scala> a.sortBy(_.split(" ")(1))
val res14: List[String] = List(02 flume, 01 hadoop, 03 hive, 04 spark)
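
The example above sorts on a string key, which is fine here; if the key should compare numerically, convert it inside the key function, otherwise "10" sorts before "2". A minimal sketch with hypothetical data:

val rows = List("10 flink", "2 hive", "1 hadoop")
val byId = rows.sortBy(_.split(" ")(0).toInt)   // numeric key, not lexicographic
// byId: List(1 hadoop, 2 hive, 10 flink)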

sortWith: sort with a custom comparison    def sortWith(lt: (A, A) => Boolean): List[A]

scala> val a = List(2,3,1,6,4,5)
val a: List[Int] = List(2, 3, 1, 6, 4, 5)

scala> a.sortWith((x,y)=>x<y)
val res15: List[Int] = List(1, 2, 3, 4, 5, 6)

scala> a.sortWith((x,y)=>x>y)
val res16: List[Int] = List(6, 5, 4, 3, 2, 1)

scala> a.sortWith(_ > _)
val res17: List[Int] = List(6, 5, 4, 3, 2, 1)
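
sortWith accepts any "less than" style predicate, which makes custom orderings over richer elements easy. A minimal sketch with hypothetical data, sorting (name, score) pairs by score, highest first:

val scores = List(("lisi", 80), ("zhangsan", 95), ("wangwu", 60))
val ranked = scores.sortWith(_._2 > _._2)   // compare on the second tuple field
// ranked: List((zhangsan,95), (lisi,80), (wangwu,60))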

6. Grouping: groupBy

def groupBy[K](f: (A) => K): Map[K, List[A]]
Groups the elements by the key computed by f; each key maps to the list of elements that produced it.


scala> val a = List("zhangsan"->"m","lisi"->"f","wangwu"->"m")
val a: List[(String, String)] = List((zhangsan,m), (lisi,f), (wangwu,m))

scala> a.groupBy(x => x._2)
val res18: scala.collection.immutable.Map[String,List[(String, String)]] = HashMap(f -> List((lisi,f)), m -> List((zhangsan,m), (wangwu,m)))

scala> res18.map(x=>x._1 -> x._2.size)
val res19: scala.collection.immutable.Map[String,Int] = HashMap(f -> 1, m -> 2)
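
The groupBy-then-map pattern above is a common way to count by key; on Scala 2.13+ the standard groupMapReduce does the same in one pass. A minimal sketch with hypothetical names:

val people = List("zhangsan" -> "m", "lisi" -> "f", "wangwu" -> "m")
val counts = people.groupMapReduce(_._2)(_ => 1)(_ + _)   // key, per-element value, combine
// counts: Map(m -> 2, f -> 1)   (Map iteration order is not guaranteed)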


7. Aggregation: reduce

reduce (reduceLeft) and reduceRight

def reduce[A1 >: A](op: (A1, A1) => A1): A1
A1 is a supertype of the element type A. In op, the first A1 is the result accumulated so far and the second A1 is the next element (reduce on a List delegates to reduceLeft; reduceRight folds from the right).

scala> val a = List(1,2,3,4,5,6,7,8,9,10)
val a: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

scala> a.reduce((x,y) => x+y)
val res24: Int = 55

scala> a.reduce(_ + _)
val res25: Int = 55
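
With a commutative operation like + the direction does not matter, but reduceLeft and reduceRight differ for non-commutative operations, and reduce throws on an empty list. A minimal sketch with hypothetical names:

val b = List(1, 2, 3, 4)
val left  = b.reduceLeft(_ - _)    // ((1 - 2) - 3) - 4 = -8
val right = b.reduceRight(_ - _)   // 1 - (2 - (3 - 4)) = -2
val none  = List.empty[Int].reduceOption(_ + _)   // None instead of an exception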

8. Folding: fold

def fold[A1 >: A](z: A1)(op: (A1, A1) => A1): A1
z: A1 is the initial value
op: (A1, A1) => A1 combines the accumulated value with the next element

scala> val a = List(1,2,3,4,5,6,7,8,9,10)
val a: List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

scala> a.fold(0)((x,y) => x+y)
val res28: Int = 55

scala> a.fold(100)((x,y) => x+y)
val res29: Int = 155

scala> a.fold(0)(_+_)
val res26: Int = 55

scala> a.fold(100)(_+_)
val res27: Int = 155
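
fold requires the result type to be a supertype of the element type; when the accumulator should be a different type, foldLeft (or foldRight) is the usual choice. A minimal sketch with hypothetical names that folds Ints into a String:

val nums = List(1, 2, 3, 4)
val csv  = nums.foldLeft("") { (acc, x) =>
  if (acc.isEmpty) x.toString else acc + "," + x   // accumulator is a String, elements are Ints
}
// csv: "1,2,3,4"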