Preface
This article takes a look at HttpClient's BackoffManager.
BackoffManager
org/apache/http/client/BackoffManager.java
/**
 * Represents a controller that dynamically adjusts the size
 * of an available connection pool based on feedback from
 * using the connections.
 *
 * @since 4.2
 *
 */
public interface BackoffManager {

    /**
     * Called when we have decided that the result of
     * using a connection should be interpreted as a
     * backoff signal.
     */
    public void backOff(HttpRoute route);

    /**
     * Called when we have determined that the result of
     * using a connection has succeeded and that we may
     * probe for more connections.
     */
    public void probe(HttpRoute route);

}
The BackoffManager interface dynamically adjusts the per-route connection limit of the connection pool based on feedback from using the connections: it defines a backOff method for shrinking the allowed number of connections and a probe method for growing it.
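The interface itself does not decide when to shrink or grow; in HttpClient 4.x that feedback comes from a ConnectionBackoffStrategy, and HttpClientBuilder combines the two in BackoffStrategyExec. Below is a minimal wiring sketch, not a production-ready setup; the pool size is an arbitrary value chosen for illustration:

import org.apache.http.impl.client.AIMDBackoffManager;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.DefaultBackoffStrategy;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class BackoffWiringSketch {
    public static void main(String[] args) {
        // the pooling manager implements ConnPoolControl<HttpRoute>, which is what AIMDBackoffManager adjusts
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setDefaultMaxPerRoute(10);

        AIMDBackoffManager backoffManager = new AIMDBackoffManager(cm);

        // both a BackoffManager and a ConnectionBackoffStrategy are set,
        // so the builder can wire them together via BackoffStrategyExec
        CloseableHttpClient client = HttpClients.custom()
                .setConnectionManager(cm)
                .setBackoffManager(backoffManager)
                .setConnectionBackoffStrategy(new DefaultBackoffStrategy())
                .build();

        // ... use and eventually close the client; backoff signals shrink the
        // per-route limit, successful exchanges let it probe back up
    }
}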
AIMDBackoffManager
org/apache/http/impl/client/AIMDBackoffManager.java
/**
 * <p>The {@code AIMDBackoffManager} applies an additive increase,
 * multiplicative decrease (AIMD) to managing a dynamic limit to
 * the number of connections allowed to a given host. You may want
 * to experiment with the settings for the cooldown periods and the
 * backoff factor to get the adaptive behavior you want.</p>
 *
 * <p>Generally speaking, shorter cooldowns will lead to more steady-state
 * variability but faster reaction times, while longer cooldowns
 * will lead to more stable equilibrium behavior but slower reaction
 * times.</p>
 *
 * <p>Similarly, higher backoff factors promote greater
 * utilization of available capacity at the expense of fairness
 * among clients. Lower backoff factors allow equal distribution of
 * capacity among clients (fairness) to happen faster, at the
 * expense of having more server capacity unused in the short term.</p>
 *
 * @since 4.2
 */
public class AIMDBackoffManager implements BackoffManager {

    private final ConnPoolControl<HttpRoute> connPerRoute;
    private final Clock clock;
    private final Map<HttpRoute, Long> lastRouteProbes;
    private final Map<HttpRoute, Long> lastRouteBackoffs;
    private long coolDown = 5 * 1000L;
    private double backoffFactor = 0.5;
    private int cap = 2; // Per RFC 2616 sec 8.1.4

    /**
     * Creates an {@code AIMDBackoffManager} to manage
     * per-host connection pool sizes represented by the
     * given {@link ConnPoolControl}.
     * @param connPerRoute per-host routing maximums to
     *   be managed
     */
    public AIMDBackoffManager(final ConnPoolControl<HttpRoute> connPerRoute) {
        this(connPerRoute, new SystemClock());
    }

    AIMDBackoffManager(final ConnPoolControl<HttpRoute> connPerRoute, final Clock clock) {
        this.clock = clock;
        this.connPerRoute = connPerRoute;
        this.lastRouteProbes = new HashMap<HttpRoute, Long>();
        this.lastRouteBackoffs = new HashMap<HttpRoute, Long>();
    }

    @Override
    public void backOff(final HttpRoute route) {
        synchronized (connPerRoute) {
            final int curr = connPerRoute.getMaxPerRoute(route);
            final Long lastUpdate = getLastUpdate(lastRouteBackoffs, route);
            final long now = clock.getCurrentTime();
            if (now - lastUpdate.longValue() < coolDown) {
                return;
            }
            connPerRoute.setMaxPerRoute(route, getBackedOffPoolSize(curr));
            lastRouteBackoffs.put(route, Long.valueOf(now));
        }
    }

    private int getBackedOffPoolSize(final int curr) {
        if (curr <= 1) {
            return 1;
        }
        return (int) (Math.floor(backoffFactor * curr));
    }

    @Override
    public void probe(final HttpRoute route) {
        synchronized (connPerRoute) {
            final int curr = connPerRoute.getMaxPerRoute(route);
            final int max = (curr >= cap) ? cap : curr + 1;
            final Long lastProbe = getLastUpdate(lastRouteProbes, route);
            final Long lastBackoff = getLastUpdate(lastRouteBackoffs, route);
            final long now = clock.getCurrentTime();
            if (now - lastProbe.longValue() < coolDown || now - lastBackoff.longValue() < coolDown) {
                return;
            }
            connPerRoute.setMaxPerRoute(route, max);
            lastRouteProbes.put(route, Long.valueOf(now));
        }
    }

    private Long getLastUpdate(final Map<HttpRoute, Long> updates, final HttpRoute route) {
        Long lastUpdate = updates.get(route);
        if (lastUpdate == null) {
            lastUpdate = Long.valueOf(0L);
        }
        return lastUpdate;
    }

    /**
     * Sets the factor to use when backing off; the new
     * per-host limit will be roughly the current max times
     * this factor. {@code Math.floor} is applied in the
     * case of non-integer outcomes to ensure we actually
     * decrease the pool size. Pool sizes are never decreased
     * below 1, however. Defaults to 0.5.
     * @param d must be between 0.0 and 1.0, exclusive.
     */
    public void setBackoffFactor(final double d) {
        Args.check(d > 0.0 && d < 1.0, "Backoff factor must be 0.0 < f < 1.0");
        backoffFactor = d;
    }

    /**
     * Sets the amount of time, in milliseconds, to wait between
     * adjustments in pool sizes for a given host, to allow
     * enough time for the adjustments to take effect. Defaults
     * to 5000L (5 seconds).
     * @param l must be positive
     */
    public void setCooldownMillis(final long l) {
        Args.positive(coolDown, "Cool down");
        coolDown = l;
    }

    /**
     * Sets the absolute maximum per-host connection pool size to
     * probe up to; defaults to 2 (the default per-host max).
     * @param cap must be >= 1
     */
    public void setPerHostConnectionCap(final int cap) {
        Args.positive(cap, "Per host connection cap");
        this.cap = cap;
    }

}
AIMD is short for Additive Increase, Multiplicative Decrease (grow the limit linearly, cut it back multiplicatively); it is the same scheme TCP congestion control uses: while exchanges succeed the limit is raised additively, and on a failure signal it is scaled back by a factor. AIMDBackoffManager implements the BackoffManager interface and keeps two maps, lastRouteProbes and lastRouteBackoffs. backOff reads the time of the route's last backoff from lastRouteBackoffs; if less than coolDown has elapsed it does nothing, otherwise it updates maxPerRoute to getBackedOffPoolSize(curr), i.e. Math.floor(backoffFactor * curr), never going below 1. probe reads both the last probe and the last backoff time; if either is less than coolDown ago it does nothing, otherwise it updates maxPerRoute to (curr >= cap) ? cap : curr + 1. The default cap is 2, taken from the now-obsolete two-connections-per-host guidance of RFC 2616 sec 8.1.4, so in practice it should be raised via setPerHostConnectionCap, typically to the maxPerRoute the user originally configured.
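To make the arithmetic concrete, here is a hand-driven sketch (assuming HttpClient 4.x) against a PoolingHttpClientConnectionManager, which implements ConnPoolControl<HttpRoute>; the host, the initial maxPerRoute of 10, the cap of 10 and the shortened cooldown are arbitrary demo values:

import org.apache.http.HttpHost;
import org.apache.http.conn.routing.HttpRoute;
import org.apache.http.impl.client.AIMDBackoffManager;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class AIMDBackoffDemo {
    public static void main(String[] args) throws InterruptedException {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        HttpRoute route = new HttpRoute(new HttpHost("example.com", 80));
        cm.setMaxPerRoute(route, 10);

        AIMDBackoffManager manager = new AIMDBackoffManager(cm);
        manager.setPerHostConnectionCap(10);   // raise the obsolete default cap of 2
        manager.setBackoffFactor(0.5);
        manager.setCooldownMillis(1000);       // short cooldown so the demo runs quickly

        manager.backOff(route);
        System.out.println(cm.getMaxPerRoute(route)); // 10 -> 5, i.e. floor(0.5 * 10)

        manager.backOff(route);
        System.out.println(cm.getMaxPerRoute(route)); // still 5: within the cooldown window

        Thread.sleep(1100);                    // let the cooldown expire
        manager.probe(route);
        System.out.println(cm.getMaxPerRoute(route)); // 5 -> 6: additive increase, capped at 10
    }
}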
Summary
HttpClient's BackoffManager interface dynamically adjusts the per-route connection limit of the connection pool based on feedback from using the connections: backOff shrinks the allowed number of connections and probe grows it. The default implementation uses the same AIMD algorithm as TCP congestion control: on a backoff signal the limit is cut to Math.floor(backoffFactor * curr), and on success it is raised to (curr >= cap) ? cap : curr + 1. The default cap of 2 comes from the now-obsolete two-connections-per-host guidance of RFC 2616 sec 8.1.4, so in practice it should be raised via setPerHostConnectionCap, typically to the maxPerRoute the user originally configured.
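As a closing note, user code normally never calls backOff/probe directly: BackoffStrategyExec consults a ConnectionBackoffStrategy after each exchange and forwards the verdict to the BackoffManager. The following is a hypothetical custom strategy (not HttpClient's stock DefaultBackoffStrategy), sketched only to show what such a feedback source can look like:

import java.net.ConnectException;
import java.net.SocketTimeoutException;

import org.apache.http.HttpResponse;
import org.apache.http.client.ConnectionBackoffStrategy;

// hypothetical strategy: treat throttling responses and connect/timeout failures as backoff signals
public class ThrottleAwareBackoffStrategy implements ConnectionBackoffStrategy {

    @Override
    public boolean shouldBackoff(final Throwable t) {
        return t instanceof ConnectException || t instanceof SocketTimeoutException;
    }

    @Override
    public boolean shouldBackoff(final HttpResponse resp) {
        final int status = resp.getStatusLine().getStatusCode();
        return status == 503 || status == 429;
    }
}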