
Lesson 17: How Spark Streaming Dynamically Requests Resources and Dynamically Controls the Consumption Rate

Source: 程序员人生    Published: 2016-06-24 17:54:39

Why does Spark Streaming need dynamic behavior?
a) By default Spark is coarse-grained: resources are allocated up front, and only then does the computation run. A Spark Streaming application, however, sees both peak and off-peak load, and the two need very different amounts of resources. Sizing the allocation for the peak wastes a large amount of capacity the rest of the time.
b) Spark Streaming runs continuously, so its long-term resource consumption and management must also be taken into account.
Dynamically adjusting resources for Spark Streaming faces a particular challenge: the application runs batch by batch, per Batch Duration. One batch may need a lot of resources while the next one does not, and a resource adjustment can still be in progress when the batch it was meant for has already finished. The adjustment interval therefore has to be chosen with the batch duration in mind.

Spark Streaming Dynamic Resource Allocation
1. Dynamic resource allocation is not enabled by default in SparkContext, but it can be switched on manually through SparkConf (a configuration sketch follows the source excerpt below).

// Optionally scale number of executors dynamically based on workload. Exposed for testing.
val dynamicAllocationEnabled = Utils.isDynamicAllocationEnabled(_conf)
if (!dynamicAllocationEnabled &&
    // the configuration parameter that switches dynamic resource allocation on
    _conf.getBoolean("spark.dynamicAllocation.enabled", false)) {
  logWarning("Dynamic Allocation and num executors both set, thus dynamic allocation disabled.")
}

_executorAllocationManager =
  if (dynamicAllocationEnabled) {
    Some(new ExecutorAllocationManager(this, listenerBus, _conf))
  } else {
    None
  }
_executorAllocationManager.foreach(_.start())
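For reference, enabling this code path from the application side could look roughly like the following sketch. The property names are standard Spark configuration keys, but the application name and the executor counts are placeholder values, not recommendations.

import org.apache.spark.SparkConf

// Illustrative configuration only: the min/max executor counts are placeholders.
val conf = new SparkConf()
  .setAppName("StreamingWithDynamicAllocation")
  .set("spark.dynamicAllocation.enabled", "true")     // switch dynamic allocation on
  .set("spark.shuffle.service.enabled", "true")       // an external shuffle service must be running
  .set("spark.dynamicAllocation.minExecutors", "2")   // floor for off-peak periods
  .set("spark.dynamicAllocation.maxExecutors", "10")  // ceiling for peak periods

Note that, as the warning in the excerpt above indicates, fixing the number of executors (for example via --num-executors) while spark.dynamicAllocation.enabled is true causes dynamic allocation to be disabled.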
2. ExecutorAllocationManager: a timer continually scans the state of the executors and the stages that are currently running, and decides whether executors need to be added or removed.
3. The schedule method of ExecutorAllocationManager is triggered periodically to carry out the dynamic resource adjustment (a standalone sketch of its expiry idiom follows the excerpt below).
/**
 * This is called at a fixed interval to regulate the number of pending executor requests
 * and number of executors running.
 *
 * First, adjust our requested executors based on the add time and our current needs.
 * Then, if the remove time for an existing executor has expired, kill the executor.
 *
 * This is factored out into its own method for testing.
 */
private def schedule(): Unit = synchronized {
  val now = clock.getTimeMillis

  updateAndSyncNumExecutorsTarget(now)

  removeTimes.retain { case (executorId, expireTime) =>
    val expired = now >= expireTime
    if (expired) {
      initializing = false
      removeExecutor(executorId)
    }
    !expired
  }
}
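To make the expiry idiom above concrete, here is a small standalone sketch (not Spark code) that prunes a mutable map the same way removeTimes is pruned: expired entries are acted on and dropped, the rest are kept. It uses retain, the Scala 2.11/2.12 API (filterInPlace on 2.13); the executor ids and timestamps are illustrative.

import scala.collection.mutable

object RetainSketch {
  def main(args: Array[String]): Unit = {
    val now = System.currentTimeMillis()
    // executorId -> time at which the executor becomes eligible for removal (illustrative values)
    val removeTimes = mutable.HashMap(
      "executor-1" -> (now - 1000L),   // already expired
      "executor-2" -> (now + 60000L))  // still inside its idle timeout

    removeTimes.retain { case (executorId, expireTime) =>
      val expired = now >= expireTime
      if (expired) {
        println(s"would request removal of $executorId")  // stands in for removeExecutor()
      }
      !expired
    }

    println("remaining: " + removeTimes.keys.mkString(", "))  // only executor-2 remains
  }
}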
4. Inside ExecutorAllocationManager, a timer backed by a thread pool runs schedule repeatedly (the timer pattern is sketched after the excerpt below).
/**
 * Register for scheduler callbacks to decide when to add and remove executors, and start
 * the scheduling task.
 */
def start(): Unit = {
  listenerBus.addListener(listener)

  val scheduleTask = new Runnable() {
    override def run(): Unit = {
      try {
        schedule()
      } catch {
        case ct: ControlThrowable =>
          throw ct
        case t: Throwable =>
          logWarning(s"Uncaught exception in thread ${Thread.currentThread().getName}", t)
      }
    }
  }
  // intervalMillis is the interval at which the timer fires
  executor.scheduleAtFixedRate(scheduleTask, 0, intervalMillis, TimeUnit.MILLISECONDS)
}
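The timer here is a plain java.util.concurrent scheduled executor. The following standalone sketch reproduces the pattern under that assumption: a single-threaded scheduled executor fires a task at a fixed interval, and the task catches exceptions so that one failure does not stop the timer. The interval and the task body are placeholders.

import java.util.concurrent.{Executors, TimeUnit}

object ScheduleTimerSketch {
  def main(args: Array[String]): Unit = {
    val intervalMillis = 100L  // placeholder; ExecutorAllocationManager uses its own interval
    val executor = Executors.newSingleThreadScheduledExecutor()

    val scheduleTask = new Runnable {
      override def run(): Unit = {
        try {
          println("schedule() would run here")  // stands in for the real schedule() body
        } catch {
          case t: Throwable => println(s"uncaught exception: ${t.getMessage}")
        }
      }
    }

    // fire immediately, then every intervalMillis milliseconds
    executor.scheduleAtFixedRate(scheduleTask, 0, intervalMillis, TimeUnit.MILLISECONDS)
    Thread.sleep(500)       // let the timer fire a few times
    executor.shutdownNow()  // stop the periodic task
  }
}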

Dynamic control of the consumption rate (backpressure):
Spark Streaming provides an elasticity mechanism that relates the rate at which data flows in to the rate at which it is processed, i.e., whether processing can keep up with ingestion. If it cannot, Spark Streaming automatically and dynamically throttles the rate at which data is consumed. This behavior is controlled by the spark.streaming.backpressure.enabled parameter.
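A minimal sketch of turning backpressure on when building the streaming context is shown below. spark.streaming.backpressure.enabled is the switch named above; the initial rate, receiver cap, and batch interval are illustrative values, not recommendations.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Illustrative only: let Spark adjust the receiving rate to match the observed processing rate.
val conf = new SparkConf()
  .setAppName("StreamingWithBackpressure")
  .set("spark.streaming.backpressure.enabled", "true")      // enable dynamic rate control
  .set("spark.streaming.backpressure.initialRate", "1000")  // cap for the first batch (records/sec per receiver)
  .set("spark.streaming.receiver.maxRate", "10000")         // hard upper bound for receiver-based sources

val ssc = new StreamingContext(conf, Seconds(5))  // the batch duration here is a placeholder

With backpressure enabled, the rate for each batch is derived from the previous batch's processing and scheduling delays, so the ingestion rate tracks what the cluster can actually sustain.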

