Starting from startup.sh we find catalina.sh, which points to the launcher class: org.apache.catalina.startup.Bootstrap.
Initialization phase
1. Bootstrap's main method instantiates Bootstrap itself and then initializes a set of classloaders: commonLoader, serverLoader, and sharedLoader. The jars the commonLoader reads are configured in conf/catalina.properties:
common.loader=${catalina.base}/lib,${catalina.base}/lib/*.jar,${catalina.home}/lib,${catalina.home}/lib/*.jar
The other two classloaders have no directories configured for them, i.e. nothing of their own to load.
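For reference, a simplified sketch of what Bootstrap#initClassLoaders does (the real source names the server loader catalinaLoader; error handling trimmed). createClassLoader reads the "<name>.loader" property from catalina.properties; when that property is empty it simply returns the parent, so with the default config serverLoader and sharedLoader end up being the commonLoader itself.
private void initClassLoaders() {
    try {
        commonLoader = createClassLoader("common", null);          // reads common.loader
        if (commonLoader == null) {
            // no config file; we might be in a 'single' classloader environment
            commonLoader = this.getClass().getClassLoader();
        }
        serverLoader = createClassLoader("server", commonLoader);  // server.loader is empty by default
        sharedLoader = createClassLoader("shared", commonLoader);  // shared.loader is empty by default
    } catch (Throwable t) {
        log.error("Class loader creation threw exception", t);
        System.exit(1);
    }
}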
2. Bootstrap's start method is called; it initializes an org.apache.catalina.startup.Catalina object and sets its classloader member variable to the sharedLoader.
3. The Catalina object's start method is called; this is where things get slightly more involved.
public void start() {
if (getServer() == null) {
load();
}
if (getServer() == null) {
log.fatal("Cannot start server. Server instance is not configured.");
return;
}
long t1 = System.nanoTime();
// Start the new server
try {
getServer().start();
} catch (LifecycleException e) {
log.fatal(sm.getString("catalina.serverStartFail"), e);
try {
getServer().destroy();
} catch (LifecycleException e1) {
log.debug("destroy() failed for failed Server ", e1);
}
return;
}
long t2 = System.nanoTime();
if(log.isInfoEnabled()) {
log.info("Server startup in " + ((t2 - t1) / 1000000) + " ms");
}
// Register shutdown hook
if (useShutdownHook) {
if (shutdownHook == null) {
shutdownHook = new CatalinaShutdownHook();
}
Runtime.getRuntime().addShutdownHook(shutdownHook);
// If JULI is being used, disable JULI's shutdown hook since
// shutdown hooks run in parallel and log messages may be lost
// if JULI's hook completes before the CatalinaShutdownHook()
LogManager logManager = LogManager.getLogManager();
if (logManager instanceof ClassLoaderLogManager) {
((ClassLoaderLogManager) logManager).setUseShutdownHook(
false);
}
}
if (await) {
await();
stop();
}
}
Here the Digester tool is used: as it parses server.xml it instantiates objects on the fly, such as the org.apache.catalina.core.StandardServer object and the org.apache.catalina.deploy.NamingResources object. For a given tag, Digester can instantiate a configured default class, or read the tag's className attribute to decide which class to instantiate, and assign the result to the corresponding member of the Catalina object tree (it can follow the nesting too, e.g. instantiating the Service member inside the Server object). The Server element, for instance, defaults to org.apache.catalina.core.StandardServer, and the Service member inside it is then instantiated as org.apache.catalina.core.StandardService. There are also custom instantiation rules: ConnectorCreateRule takes the protocol attribute from the Connector element and passes it as the constructor argument to org.apache.catalina.connector.Connector#Connector(java.lang.String) to instantiate the Connector.
In short, Digester provides four capabilities: 1. read an XML tag and instantiate the configured default class; 2. read a tag's className attribute and instantiate that class; 3. set member variables following the tag hierarchy; 4. custom instantiation rules. (A sketch of how such rules are registered follows the ConnectorCreateRule excerpt below.)
//ConnectorCreateRule.java
//ConnectorCreateRule passes the protocol attribute of the Connector element to org.apache.catalina.connector.Connector#Connector(java.lang.String) to instantiate the Connector
@Override
public void begin(String namespace, String name, Attributes attributes)
throws Exception {
Service svc = (Service)digester.peek();
Executor ex = null;
if ( attributes.getValue("executor")!=null ) {
ex = svc.getExecutor(attributes.getValue("executor"));
}
Connector con = new Connector(attributes.getValue("protocol"));
if ( ex != null ) _setExecutor(con,ex);
digester.push(con);
}
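To see how these four capabilities map onto the Digester API, here is a trimmed excerpt in the spirit of org.apache.catalina.startup.Catalina#createStartDigester (patterns and classes as in Tomcat, but far from the full rule set):
Digester digester = new Digester();
// capabilities 1 + 2: create an object for the <Server> element, defaulting to
// StandardServer but honoring an optional className attribute in the XML
digester.addObjectCreate("Server", "org.apache.catalina.core.StandardServer", "className");
digester.addSetProperties("Server"); // copy the element's attributes onto the bean
digester.addSetNext("Server", "setServer", "org.apache.catalina.Server");
// capability 3: nested patterns. The <Service> created inside <Server> is wired
// into the Server object via addService
digester.addObjectCreate("Server/Service", "org.apache.catalina.core.StandardService", "className");
digester.addSetProperties("Server/Service");
digester.addSetNext("Server/Service", "addService", "org.apache.catalina.Service");
// capability 4: a custom rule. ConnectorCreateRule feeds the protocol attribute
// to the Connector constructor, as shown above
digester.addRule("Server/Service/Connector", new ConnectorCreateRule());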
It also instantiates org.apache.catalina.core.StandardThreadExecutor; this Executor is later used by the Connector to run SocketProcessor tasks.
Step 3 instantiates a lot of objects; to keep the main line easy to follow, the rest are ignored for now.
4. After all those objects are instantiated, the Server object's init method is called. Since StandardServer extends org.apache.catalina.util.LifecycleBase, this actually runs LifecycleBase's init template method, which delegates to initInternal:
@Override
protected void initInternal() throws LifecycleException {
// ... some code omitted here
// Initialize our defined Services
for (int i = 0; i < services.length; i++) {
services[i].init();
}
}
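The template method itself is short; roughly (simplified from LifecycleBase, with the try/catch around initInternal trimmed):
// Simplified from org.apache.catalina.util.LifecycleBase#init: the template method
// guards the state transition and delegates the real work to the subclass's initInternal().
@Override
public final synchronized void init() throws LifecycleException {
    if (!state.equals(LifecycleState.NEW)) {
        invalidTransition(Lifecycle.BEFORE_INIT_EVENT);
    }
    setStateInternal(LifecycleState.INITIALIZING, null, false);
    initInternal(); // StandardServer, StandardService, Connector... each override this
    setStateInternal(LifecycleState.INITIALIZED, null, false);
}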
5. The Server's init in turn calls init on each Service (meaning the Service element in server.xml can appear multiple times). Somewhat confusingly, a Service's init also goes through LifecycleBase's init template method; this part is easy to muddle because Server and Service have similar names and both funnel through the same template method.
@Override
protected void initInternal() throws LifecycleException {
// ... some code omitted here
// Initialize our defined Connectors
synchronized (connectorsLock) {
for (Connector connector : connectors) {
try {
connector.init();
} catch (Exception e) {
String message = sm.getString(
"standardService.connector.initFailed", connector);
log.error(message, e);
if (Boolean.getBoolean("org.apache.catalina.startup.EXIT_ON_INIT_FAILURE"))
throw new LifecycleException(message);
}
}
}
}
6. The Service object calls each Connector's init method, again via the LifecycleBase init template method:
@Override
protected void initInternal() throws LifecycleException {
super.initInternal();
// Initialize adapter
adapter = new CoyoteAdapter(this);
protocolHandler.setAdapter(adapter);
// Make sure parseBodyMethodsSet has a default
if (null == parseBodyMethodsSet) {
setParseBodyMethods(getParseBodyMethods());
}
if (protocolHandler.isAprRequired() &&
!AprLifecycleListener.isAprAvailable()) {
throw new LifecycleException(
sm.getString("coyoteConnector.protocolHandlerNoApr",
getProtocolHandlerClassName()));
}
try {
protocolHandler.init();
} catch (Exception e) {
throw new LifecycleException(
sm.getString("coyoteConnector.protocolHandlerInitializationFailed"), e);
}
// Initialize mapper listener
mapperListener.init();
}
Here protocolHandler.init() is called. Where does protocolHandler come from? As mentioned in step 3, the custom rule ConnectorCreateRule passes the Connector element's protocol attribute into the Connector constructor:
public Connector(String protocol) {
setProtocol(protocol);
// Instantiate protocol handler
try {
Class<?> clazz = Class.forName(protocolHandlerClassName);
this.protocolHandler = (ProtocolHandler) clazz.getDeclaredConstructor().newInstance();
} catch (Exception e) {
log.error(sm.getString(
"coyoteConnector.protocolHandlerInstantiationFailed"), e);
}
// Default for Connector depends on this (deprecated) system property
if (Boolean.parseBoolean(System.getProperty("org.apache.tomcat.util.buf.UDecoder.ALLOW_ENCODED_SLASH", "false"))) {
encodedSolidusHandling = EncodedSolidusHandling.DECODE;
}
}
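setProtocol translates shorthand protocol names into a handler class name. A simplified version (Tomcat 7 flavor, APR branch omitted); note that selecting Http11NioProtocol here means configuring protocol="org.apache.coyote.http11.Http11NioProtocol" explicitly in server.xml:
// Simplified from Connector#setProtocol: shorthand names map to default handler
// classes; anything else is treated as a fully qualified ProtocolHandler class name.
public void setProtocol(String protocol) {
    if ("HTTP/1.1".equals(protocol)) {
        setProtocolHandlerClassName("org.apache.coyote.http11.Http11Protocol");
    } else if ("AJP/1.3".equals(protocol)) {
        setProtocolHandlerClassName("org.apache.coyote.ajp.AjpProtocol");
    } else if (protocol != null) {
        // e.g. "org.apache.coyote.http11.Http11NioProtocol" selects the NIO connector
        setProtocolHandlerClassName(protocol);
    }
}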
7. The protocolHandler's init method lives in org.apache.coyote.AbstractProtocol#init:
@Override
public void init() throws Exception {
// ... some code omitted here
String endpointName = getName();
endpoint.setName(endpointName.substring(1, endpointName.length()-1));
try {
endpoint.init();
} catch (Exception ex) {
getLog().error(sm.getString("abstractProtocolHandler.initError",
getName()), ex);
throw ex;
}
}
From here we follow org.apache.coyote.http11.Http11NioProtocol as the Protocol implementation:
public Http11NioProtocol() {
endpoint=new NioEndpoint();
cHandler = new Http11ConnectionHandler(this);
((NioEndpoint) endpoint).setHandler(cHandler);
setSoLinger(Constants.DEFAULT_CONNECTION_LINGER);
setSoTimeout(Constants.DEFAULT_CONNECTION_TIMEOUT);
setTcpNoDelay(Constants.DEFAULT_TCP_NO_DELAY);
}
8. The protocolHandler's init calls NioEndpoint's init; NioEndpoint inherits the init template method from org.apache.tomcat.util.net.AbstractEndpoint:
public final void init() throws Exception {
testServerCipherSuitesOrderSupport();
if (bindOnInit) {
bind();
bindState = BindState.BOUND_ON_INIT;
}
}
9. The bind method is left to subclasses; let's look at the org.apache.tomcat.util.net.NioEndpoint#bind implementation:
@Override
public void bind() throws Exception {
serverSock = ServerSocketChannel.open();
socketProperties.setProperties(serverSock.socket());
InetSocketAddress addr = (getAddress()!=null?new InetSocketAddress(getAddress(),getPort()):new InetSocketAddress(getPort()));
serverSock.socket().bind(addr,getBacklog());
serverSock.configureBlocking(true); //mimic APR behavior
if (getSocketProperties().getSoTimeout() >= 0) {
serverSock.socket().setSoTimeout(getSocketProperties().getSoTimeout());
}
// Initialize thread count defaults for acceptor, poller
if (acceptorThreadCount == 0) {
// FIXME: Doesn't seem to work that well with multiple accept threads
acceptorThreadCount = 1;
}
if (pollerThreadCount <= 0) {
//minimum one poller thread
pollerThreadCount = 1;
}
stopLatch = new CountDownLatch(pollerThreadCount);
// Initialize SSL if needed
if (isSSLEnabled()) {
SSLUtil sslUtil = handler.getSslImplementation().getSSLUtil(this);
sslContext = sslUtil.createSSLContext();
sslContext.init(wrap(sslUtil.getKeyManagers()),
sslUtil.getTrustManagers(), null);
SSLSessionContext sessionContext =
sslContext.getServerSessionContext();
if (sessionContext != null) {
sslUtil.configureSessionContext(sessionContext);
}
// Determine which cipher suites and protocols to enable
enabledCiphers = sslUtil.getEnableableCiphers(sslContext);
enabledProtocols = sslUtil.getEnableableProtocols(sslContext);
}
if (oomParachute>0) reclaimParachute(true);
selectorPool.open();
}
Here we see a ServerSocketChannel being instantiated and set to blocking mode, but it is not registered with any Selector. org.apache.tomcat.util.net.NioSelectorPool#open is called, which does instantiate a Selector, again without registering the server channel:
protected Selector getSharedSelector() throws IOException {
if (SHARED && SHARED_SELECTOR == null) {
synchronized ( NioSelectorPool.class ) {
if ( SHARED_SELECTOR == null ) {
synchronized (Selector.class) {
// Selector.open() isn't thread safe
// http://bugs.sun.com/view_bug.do?bug_id=6427854
// Affects 1.6.0_29, fixed in 1.7.0_01
SHARED_SELECTOR = Selector.open();
}
log.info("Using a shared selector for servlet write/read");
}
}
}
return SHARED_SELECTOR;
}
public void open() throws IOException {
enabled = true;
getSharedSelector();
if (SHARED) {
blockingSelector = new NioBlockingSelector();
blockingSelector.open(getSharedSelector());
}
}
The Selector is then assigned to the NioBlockingSelector's member variable:
public void open(Selector selector) {
sharedSelector = selector;
poller = new BlockPoller();
poller.selector = sharedSelector;
poller.setDaemon(true);
poller.setName("NioBlockingSelector.BlockPoller-"+(threadCounter.getAndIncrement()));
poller.start();
}
BlockPoller is a thread object:
protected static class BlockPoller extends Thread {
protected volatile boolean run = true;
protected Selector selector = null;
protected ConcurrentLinkedQueue<Runnable> events = new ConcurrentLinkedQueue<Runnable>();
}
It maintains an events queue of Runnables, used to hand registration and interest-ops changes over to the BlockPoller thread, so that only that thread ever touches the shared Selector. The ownership chain:
NioEndpoint --> NioSelectorPool --> NioBlockingSelector --> BlockPoller
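The pattern here is the classic single-threaded selector owner: other threads never touch the Selector directly; they enqueue a Runnable and wake the selector up. A minimal, generic sketch of the pattern (illustrative names, not the actual BlockPoller source):
import java.io.IOException;
import java.nio.channels.Selector;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative selector-owner pattern: only this thread touches the Selector;
// other threads enqueue work (e.g. interest-ops changes) and wake it up.
class SelectorOwner extends Thread {
    private final Selector selector;
    private final ConcurrentLinkedQueue<Runnable> events = new ConcurrentLinkedQueue<>();

    SelectorOwner(Selector selector) { this.selector = selector; }

    void offer(Runnable r) {   // called from other threads
        events.offer(r);
        selector.wakeup();     // break out of select() so the event runs promptly
    }

    @Override
    public void run() {
        while (!isInterrupted()) {
            Runnable r;
            while ((r = events.poll()) != null) {
                r.run();       // apply registrations/interest-ops on this thread only
            }
            try {
                selector.select(1000);
            } catch (IOException e) {
                break;
            }
            // ... iterate selectedKeys() and dispatch ...
        }
    }
}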
That completes the main initialization work inside the Catalina object's start method.
Next, getServer().start() is executed.
Startup phase
1. The Server's start method kicks off another round of the org.apache.catalina.util.LifecycleBase#start template method:
@Override
protected void startInternal() throws LifecycleException {
fireLifecycleEvent(CONFIGURE_START_EVENT, null);
setState(LifecycleState.STARTING);
globalNamingResources.start();
// Start our defined Services
synchronized (servicesLock) {
for (int i = 0; i < services.length; i++) {
services[i].start();
}
}
}
2. The Service's start method (again the org.apache.catalina.util.LifecycleBase#start template method):
@Override
protected void startInternal() throws LifecycleException {
// ... some code omitted here
// Start our defined Connectors second
synchronized (connectorsLock) {
for (Connector connector: connectors) {
try {
// If it has already failed, don't try and start it
if (connector.getState() != LifecycleState.FAILED) {
connector.start();
}
} catch (Exception e) {
log.error(sm.getString(
"standardService.connector.startFailed",
connector), e);
}
}
}
}
3. The Connector's start method is called (org.apache.catalina.util.LifecycleBase#start template method):
@Override
protected void startInternal() throws LifecycleException {
// Validate settings before starting
if (getPort() < 0) {
throw new LifecycleException(sm.getString(
"coyoteConnector.invalidPort", Integer.valueOf(getPort())));
}
setState(LifecycleState.STARTING);
try {
protocolHandler.start();
} catch (Exception e) {
String errPrefix = "";
if(this.service != null) {
errPrefix += "service.getName(): \"" + this.service.getName() + "\"; ";
}
throw new LifecycleException
(errPrefix + " " + sm.getString
("coyoteConnector.protocolHandlerStartFailed"), e);
}
mapperListener.start();
}
4. protocolHandler.start(): note this does not go through the org.apache.catalina.util.LifecycleBase#start template method; it calls org.apache.coyote.AbstractProtocol#start instead.
@Override
public void start() throws Exception {
if (getLog().isInfoEnabled())
getLog().info(sm.getString("abstractProtocolHandler.start",
getName()));
try {
endpoint.start();
} catch (Exception ex) {
getLog().error(sm.getString("abstractProtocolHandler.startError",
getName()), ex);
throw ex;
}
}
5. endpoint.start() invokes org.apache.tomcat.util.net.AbstractEndpoint#start:
public final void start() throws Exception {
if (bindState == BindState.UNBOUND) {
bind();
bindState = BindState.BOUND_ON_START;
}
startInternal();
}
bind() was already called during the init phase, so it is skipped here and execution proceeds to startInternal.
6. startInternal is implemented by the subclass, here org.apache.tomcat.util.net.NioEndpoint#startInternal:
@Override
public void startInternal() throws Exception {
if (!running) {
running = true;
paused = false;
// Create worker collection
if ( getExecutor() == null ) {
createExecutor();
}
initializeConnectionLatch();
// Start poller threads
pollers = new Poller[getPollerThreadCount()];
for (int i=0; i<pollers.length; i++) {
pollers[i] = new Poller();
Thread pollerThread = new Thread(pollers[i], getName() + "-ClientPoller-"+i);
pollerThread.setPriority(threadPriority);
pollerThread.setDaemon(true);
pollerThread.start();
}
startAcceptorThreads();
}
}
This creates org.apache.tomcat.util.net.NioEndpoint.Acceptor instances that loop accepting connections. The countUpOrAwaitConnection method decides whether the acceptor may pull another connection off the OS backlog; the limit is governed by maxConnections. With the BIO connector, maxConnections = maxThreads, so when all worker threads are busy the acceptor waits for a free worker before accepting again. With NIO, maxConnections defaults to 10000, so instead of waiting on busy workers the acceptor keeps pulling connections off the OS backlog and queues them as events for the Pollers. The Acceptor's run loop:
@Override
public void run() {
int errorDelay = 0;
// Loop until we receive a shutdown command
while (running) {
// Loop if endpoint is paused
while (paused && running) {
state = AcceptorState.PAUSED;
try {
Thread.sleep(50);
} catch (InterruptedException e) {
// Ignore
}
}
if (!running) {
break;
}
state = AcceptorState.RUNNING;
try {
//if we have reached max connections, wait
countUpOrAwaitConnection();
SocketChannel socket = null;
try {
// Accept the next incoming connection from the server
// socket
socket = serverSock.accept();
} catch (IOException ioe) {
//we didn't get a socket
countDownConnection();
// Introduce delay if necessary
errorDelay = handleExceptionWithDelay(errorDelay);
// re-throw
throw ioe;
}
// Successful accept, reset the error delay
errorDelay = 0;
// setSocketOptions() will add channel to the poller
// if successful
if (running && !paused) {
if (!setSocketOptions(socket)) {
countDownConnection();
closeSocket(socket);
}
} else {
countDownConnection();
closeSocket(socket);
}
} catch (SocketTimeoutException sx) {
// Ignore: Normal condition
} catch (IOException x) {
if (running) {
log.error(sm.getString("endpoint.accept.fail"), x);
}
} catch (OutOfMemoryError oom) {
try {
oomParachuteData = null;
releaseCaches();
log.error("", oom);
}catch ( Throwable oomt ) {
try {
try {
System.err.println(oomParachuteMsg);
oomt.printStackTrace();
}catch (Throwable letsHopeWeDontGetHere){
ExceptionUtils.handleThrowable(letsHopeWeDontGetHere);
}
}catch (Throwable letsHopeWeDontGetHere){
ExceptionUtils.handleThrowable(letsHopeWeDontGetHere);
}
}
} catch (Throwable t) {
ExceptionUtils.handleThrowable(t);
log.error(sm.getString("endpoint.accept.fail"), t);
}
}
state = AcceptorState.ENDED;
}
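For reference, countUpOrAwaitConnection in AbstractEndpoint is a thin wrapper around a LimitLatch sized by maxConnections (simplified; the real countDownConnection also logs if the count goes negative):
// Simplified from org.apache.tomcat.util.net.AbstractEndpoint: the acceptor blocks
// in countUpOrAwait() once maxConnections is reached, and each closed connection
// counts the latch back down.
protected void countUpOrAwaitConnection() throws InterruptedException {
    if (maxConnections == -1) {
        return; // unlimited
    }
    LimitLatch latch = connectionLimitLatch;
    if (latch != null) {
        latch.countUpOrAwait(); // blocks while count >= maxConnections
    }
}

protected long countDownConnection() {
    if (maxConnections == -1) {
        return -1;
    }
    LimitLatch latch = connectionLimitLatch;
    return (latch != null) ? latch.countDown() : -1;
}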
Each accepted connection (a socket) is handed to org.apache.tomcat.util.net.NioEndpoint#setSocketOptions for processing:
protected boolean setSocketOptions(SocketChannel socket) {
// Process the connection
try {
//disable blocking, APR style, we are gonna be polling it
socket.configureBlocking(false);
Socket sock = socket.socket();
socketProperties.setProperties(sock);
NioChannel channel = nioChannels.poll();
if ( channel == null ) {
// SSL setup
if (sslContext != null) {
SSLEngine engine = createSSLEngine();
int appbufsize = engine.getSession().getApplicationBufferSize();
NioBufferHandler bufhandler = new NioBufferHandler(Math.max(appbufsize,socketProperties.getAppReadBufSize()),
Math.max(appbufsize,socketProperties.getAppWriteBufSize()),
socketProperties.getDirectBuffer());
channel = new SecureNioChannel(socket, engine, bufhandler, selectorPool);
} else {
// normal tcp setup
NioBufferHandler bufhandler = new NioBufferHandler(socketProperties.getAppReadBufSize(),
socketProperties.getAppWriteBufSize(),
socketProperties.getDirectBuffer());
channel = new NioChannel(socket, bufhandler);
}
} else {
channel.setIOChannel(socket);
if ( channel instanceof SecureNioChannel ) {
SSLEngine engine = createSSLEngine();
((SecureNioChannel)channel).reset(engine);
} else {
channel.reset();
}
}
getPoller0().register(channel);
} catch (Throwable t) {
ExceptionUtils.handleThrowable(t);
try {
log.error("",t);
} catch (Throwable tt) {
ExceptionUtils.handleThrowable(tt);
}
// Tell to close the socket
return false;
}
return true;
}
Note that the socket is set to non-blocking here, wrapped in a NioChannel, and then registered with a Poller. The Poller's register method:
public void register(final NioChannel socket) {
socket.setPoller(this);
KeyAttachment key = keyCache.poll();
final KeyAttachment ka = key!=null?key:new KeyAttachment(socket);
ka.reset(this,socket,getSocketProperties().getSoTimeout());
ka.setKeepAliveLeft(NioEndpoint.this.getMaxKeepAliveRequests());
ka.setSecure(isSSLEnabled());
PollerEvent r = eventCache.poll();
ka.interestOps(SelectionKey.OP_READ);//this is what OP_REGISTER turns into.
if ( r==null) r = new PollerEvent(socket,ka,OP_REGISTER);
else r.reset(socket,ka,OP_REGISTER);
addEvent(r);
}
This creates an event interested in READ and registers it on the Poller's Selector. Notice that the actual registration happens inside PollerEvent: OP_REGISTER may look odd since NIO has no such event, but PollerEvent translates OP_REGISTER into SelectionKey.OP_READ:
public static class PollerEvent implements Runnable {
protected NioChannel socket;
protected int interestOps;
protected KeyAttachment key;
public PollerEvent(NioChannel ch, KeyAttachment k, int intOps) {
reset(ch, k, intOps);
}
public void reset(NioChannel ch, KeyAttachment k, int intOps) {
socket = ch;
interestOps = intOps;
key = k;
}
public void reset() {
reset(null, null, 0);
}
@Override
public void run() {
if ( interestOps == OP_REGISTER ) {
try {
socket.getIOChannel().register(socket.getPoller().getSelector(), SelectionKey.OP_READ, key);
} catch (Exception x) {
log.error("", x);
}
} else {
final SelectionKey key = socket.getIOChannel().keyFor(socket.getPoller().getSelector());
try {
if (key == null) {
// The key was cancelled (e.g. due to socket closure)
// and removed from the selector while it was being
// processed. Count down the connections at this point
// since it won't have been counted down when the socket
// closed.
socket.getPoller().getEndpoint().countDownConnection();
} else {
final KeyAttachment att = (KeyAttachment) key.attachment();
if ( att!=null ) {
//handle callback flag
if (att.isComet() && (interestOps & OP_CALLBACK) == OP_CALLBACK ) {
att.setCometNotify(true);
} else {
att.setCometNotify(false);
}
interestOps = (interestOps & (~OP_CALLBACK));//remove the callback flag
att.access();//to prevent timeout
//we are registering the key to start with, reset the fairness counter.
int ops = key.interestOps() | interestOps;
att.interestOps(ops);
key.interestOps(ops);
} else {
socket.getPoller().cancelledKey(key, SocketStatus.ERROR, false);
}
}
} catch (CancelledKeyException ckx) {
try {
socket.getPoller().cancelledKey(key, SocketStatus.DISCONNECT, true);
} catch (Exception ignore) {}
}
}//end if
}//run
@Override
public String toString() {
return super.toString()+"[intOps="+this.interestOps+"]";
}
}
Because socket.setPoller(this) is called, the socket can later be registered with that Poller's Selector.
Poller is a thread; org.apache.tomcat.util.net.NioEndpoint#startInternal above created several of them, which you see at runtime as thread names like http-nio-8080-ClientPoller-1:
/**
* Poller class.
*/
public class Poller implements Runnable {
protected Selector selector;
protected ConcurrentLinkedQueue<Runnable> events = new ConcurrentLinkedQueue<Runnable>();
protected volatile boolean close = false;
protected long nextExpiration = 0;//optimize expiration handling
protected AtomicLong wakeupCounter = new AtomicLong(0l);
protected volatile int keyCount = 0;
public Poller() throws IOException {
synchronized (Selector.class) {
// Selector.open() isn't thread safe
// http://bugs.sun.com/view_bug.do?bug_id=6427854
// Affects 1.6.0_29, fixed in 1.7.0_01
this.selector = Selector.open();
}
}
...
public boolean events() {
boolean result = false;
Runnable r = null;
for (int i = 0, size = events.size(); i < size && (r = events.poll()) != null; i++ ) {
result = true;
try {
r.run();
if ( r instanceof PollerEvent ) {
((PollerEvent)r).reset();
eventCache.offer((PollerEvent)r);
}
} catch ( Throwable x ) {
log.error("",x);
}
}
return result;
}
public void register(final NioChannel socket) {
socket.setPoller(this);
KeyAttachment key = keyCache.poll();
final KeyAttachment ka = key!=null?key:new KeyAttachment(socket);
ka.reset(this,socket,getSocketProperties().getSoTimeout());
ka.setKeepAliveLeft(NioEndpoint.this.getMaxKeepAliveRequests());
ka.setSecure(isSSLEnabled());
PollerEvent r = eventCache.poll();
ka.interestOps(SelectionKey.OP_READ);//this is what OP_REGISTER turns into.
if ( r==null) r = new PollerEvent(socket,ka,OP_REGISTER);
else r.reset(socket,ka,OP_REGISTER);
addEvent(r);
}
public KeyAttachment cancelledKey(SelectionKey key, SocketStatus status, boolean dispatch) {
KeyAttachment ka = null;
try {
if ( key == null ) return null;//nothing to do
ka = (KeyAttachment) key.attachment();
if (ka != null && ka.isComet() && status != null) {
//the comet event takes care of clean up
//processSocket(ka.getChannel(), status, dispatch);
ka.setComet(false);//to avoid a loop
if (status == SocketStatus.TIMEOUT ) {
if (processSocket(ka.getChannel(), status, true)) {
return null; // don't close on comet timeout
}
} else {
// Don't dispatch if the lines below are cancelling the key
processSocket(ka.getChannel(), status, false);
}
}
ka = (KeyAttachment) key.attach(null);
if (ka!=null) handler.release(ka);
else handler.release((SocketChannel)key.channel());
if (key.isValid()) key.cancel();
// If it is available, close the NioChannel first which should
// in turn close the underlying SocketChannel. The NioChannel
// needs to be closed first, if available, to ensure that TLS
// connections are shut down cleanly.
if (ka != null) {
try {
ka.getSocket().close(true);
} catch (Exception e){
if (log.isDebugEnabled()) {
log.debug(sm.getString(
"endpoint.debug.socketCloseFail"), e);
}
}
}
// The SocketChannel is also available via the SelectionKey. If
// it hasn't been closed in the block above, close it now.
if (key.channel().isOpen()) {
try {
key.channel().close();
} catch (Exception e) {
if (log.isDebugEnabled()) {
log.debug(sm.getString(
"endpoint.debug.channelCloseFail"), e);
}
}
}
try {
if (ka != null && ka.getSendfileData() != null
&& ka.getSendfileData().fchannel != null
&& ka.getSendfileData().fchannel.isOpen()) {
ka.getSendfileData().fchannel.close();
}
} catch (Exception ignore) {
}
if (ka!=null) {
ka.reset();
countDownConnection();
}
} catch (Throwable e) {
ExceptionUtils.handleThrowable(e);
if (log.isDebugEnabled()) log.error("",e);
}
return ka;
}
/**
* The background thread that listens for incoming TCP/IP connections and
* hands them off to an appropriate processor.
*/
@Override
public void run() {
// Loop until destroy() is called
while (true) {
try {
// Loop if endpoint is paused
while (paused && (!close) ) {
try {
Thread.sleep(100);
} catch (InterruptedException e) {
// Ignore
}
}
boolean hasEvents = false;
// Time to terminate?
if (close) {
events();
timeout(0, false);
try {
selector.close();
} catch (IOException ioe) {
log.error(sm.getString(
"endpoint.nio.selectorCloseFail"), ioe);
}
break;
} else {
hasEvents = events();
}
try {
if ( !close ) {
if (wakeupCounter.getAndSet(-1) > 0) {
//if we are here, means we have other stuff to do
//do a non blocking select
keyCount = selector.selectNow();
} else {
keyCount = selector.select(selectorTimeout);
}
wakeupCounter.set(0);
}
if (close) {
events();
timeout(0, false);
try {
selector.close();
} catch (IOException ioe) {
log.error(sm.getString(
"endpoint.nio.selectorCloseFail"), ioe);
}
break;
}
} catch ( NullPointerException x ) {
//sun bug 5076772 on windows JDK 1.5
if ( log.isDebugEnabled() ) log.debug("Possibly encountered sun bug 5076772 on windows JDK 1.5",x);
if ( wakeupCounter == null || selector == null ) throw x;
continue;
} catch ( CancelledKeyException x ) {
//sun bug 5076772 on windows JDK 1.5
if ( log.isDebugEnabled() ) log.debug("Possibly encountered sun bug 5076772 on windows JDK 1.5",x);
if ( wakeupCounter == null || selector == null ) throw x;
continue;
} catch (Throwable x) {
ExceptionUtils.handleThrowable(x);
log.error("",x);
continue;
}
//either we timed out or we woke up, process events first
if ( keyCount == 0 ) hasEvents = (hasEvents | events());
Iterator<SelectionKey> iterator =
keyCount > 0 ? selector.selectedKeys().iterator() : null;
// Walk through the collection of ready keys and dispatch
// any active event.
while (iterator != null && iterator.hasNext()) {
SelectionKey sk = iterator.next();
KeyAttachment attachment = (KeyAttachment)sk.attachment();
// Attachment may be null if another thread has called
// cancelledKey()
if (attachment == null) {
iterator.remove();
} else {
attachment.access();
iterator.remove();
processKey(sk, attachment);
}
}//while
//process timeouts
timeout(keyCount,hasEvents);
if ( oomParachute > 0 && oomParachuteData == null ) checkParachute();
} catch (OutOfMemoryError oom) {
try {
oomParachuteData = null;
releaseCaches();
log.error("", oom);
}catch ( Throwable oomt ) {
try {
System.err.println(oomParachuteMsg);
oomt.printStackTrace();
}catch (Throwable letsHopeWeDontGetHere){
ExceptionUtils.handleThrowable(letsHopeWeDontGetHere);
}
}
}
}//while
stopLatch.countDown();
}
}
The Poller's run method watches for ready SelectionKeys and hands them to org.apache.tomcat.util.net.NioEndpoint.Poller#processKey:
protected boolean processKey(SelectionKey sk, KeyAttachment attachment) {
boolean result = true;
try {
if ( close ) {
cancelledKey(sk, SocketStatus.STOP, attachment.comet);
} else if ( sk.isValid() && attachment != null ) {
attachment.access();//make sure we don't time out valid sockets
sk.attach(attachment);//cant remember why this is here
NioChannel channel = attachment.getChannel();
if (sk.isReadable() || sk.isWritable() ) {
if ( attachment.getSendfileData() != null ) {
processSendfile(sk,attachment, false);
} else {
if ( isWorkerAvailable() ) {
unreg(sk, attachment, sk.readyOps());
boolean closeSocket = false;
// Read goes before write
if (sk.isReadable()) {
if (!processSocket(channel, SocketStatus.OPEN_READ, true)) {
closeSocket = true;
}
}
if (!closeSocket && sk.isWritable()) {
if (!processSocket(channel, SocketStatus.OPEN_WRITE, true)) {
closeSocket = true;
}
}
if (closeSocket) {
cancelledKey(sk,SocketStatus.DISCONNECT,false);
}
} else {
result = false;
}
}
}
} else {
//invalid key
cancelledKey(sk, SocketStatus.ERROR,false);
}
} catch ( CancelledKeyException ckx ) {
cancelledKey(sk, SocketStatus.ERROR,false);
} catch (Throwable t) {
ExceptionUtils.handleThrowable(t);
log.error("",t);
}
return result;
}
A key point here: processSendfile implements zero-copy file transfer, see https://www.ibm.com/developer...
if ( attachment.getSendfileData() != null ) {
processSendfile(sk,attachment, false);
}
public SendfileState processSendfile(SelectionKey sk, KeyAttachment attachment,
boolean calledByProcessor) {
long written = sd.fchannel.transferTo(sd.pos,sd.length,wc);
...
}
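The zero-copy part is FileChannel#transferTo: the kernel moves file bytes straight to the socket without a round trip through a user-space buffer. A standalone illustration (the method name and path are made up for the example):
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Illustrative only: stream a file to a socket using zero-copy transferTo.
static void sendFileZeroCopy(SocketChannel socket, String path) throws IOException {
    try (FileChannel fc = FileChannel.open(Paths.get(path), StandardOpenOption.READ)) {
        long pos = 0;
        long remaining = fc.size();
        while (remaining > 0) {
            // transferTo may send fewer bytes than requested, hence the loop
            long sent = fc.transferTo(pos, remaining, socket);
            pos += sent;
            remaining -= sent;
        }
    }
}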
Sockets not serving a file go down the org.apache.tomcat.util.net.NioEndpoint#processSocket path:
public boolean processSocket(NioChannel socket, SocketStatus status, boolean dispatch) {
try {
KeyAttachment attachment = (KeyAttachment)socket.getAttachment();
if (attachment == null) {
return false;
}
attachment.setCometNotify(false); //will get reset upon next reg
SocketProcessor sc = processorCache.poll();
if ( sc == null ) sc = new SocketProcessor(socket,status);
else sc.reset(socket,status);
if ( dispatch && getExecutor()!=null ) getExecutor().execute(sc);
else sc.run();
} catch (RejectedExecutionException rx) {
log.warn("Socket processing request was rejected for:"+socket,rx);
return false;
} catch (Throwable t) {
ExceptionUtils.handleThrowable(t);
// This means we got an OOM or similar creating a thread, or that
// the pool and its queue are full
log.error(sm.getString("endpoint.process.fail"), t);
return false;
}
return true;
}
Here a SocketProcessor is instantiated (or a cached instance is reused). SocketProcessor is a Runnable, so it can be handed to a thread pool, namely the org.apache.catalina.core.StandardThreadExecutor initialized back in the init phase.
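When no Executor is configured in server.xml, the endpoint builds an internal pool in createExecutor instead; roughly (as in AbstractEndpoint; note that Tomcat uses its own org.apache.tomcat.util.threads.ThreadPoolExecutor and TaskQueue, which prefer growing threads over queueing, unlike the plain JDK pool):
// Roughly AbstractEndpoint#createExecutor: worker threads get names like
// "http-nio-8080-exec-1" via the TaskThreadFactory name prefix.
public void createExecutor() {
    internalExecutor = true;
    TaskQueue taskqueue = new TaskQueue();
    TaskThreadFactory tf = new TaskThreadFactory(getName() + "-exec-", daemon, getThreadPriority());
    executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(),
            60, TimeUnit.SECONDS, taskqueue, tf);
    taskqueue.setParent((ThreadPoolExecutor) executor);
}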
To recap: the Connector's start launches the Acceptor thread, the one you commonly see as http-nio-8080-Acceptor-0. Usually there is only one; the code comment says as much:
// Initialize thread count default for acceptor
if (acceptorThreadCount == 0) {
// FIXME: Doesn't seem to work that well with multiple accept threads
acceptorThreadCount = 1;
}
Multiple accept threads have not been shown to help. The acceptor listens for connections; whenever one arrives it is wrapped in a NioChannel and an event is added to the events queue of a Poller. Since there are multiple Pollers, each with its own events queue, which Poller does the acceptor register the NioChannel with? A simple round-robin:
/**
* Return an available poller in true round robin fashion
*/
public Poller getPoller0() {
int idx = Math.abs(pollerRotater.incrementAndGet()) % pollers.length;
return pollers[idx];
}
Registering with a Poller binds the socket to that Poller's Selector, listening for READ events.
From then on the Poller can watch the SelectionKey and process it.
When a SelectionKey is ready, the Poller builds a SocketProcessor and submits it to the StandardThreadExecutor thread pool. The pool's properties, such as maxThreads (the maximum thread count), are set via the Digester rule org.apache.catalina.startup.SetAllPropertiesRule configured in Catalina.
And that wraps up Catalina's start phase: once the Acceptor and the Pollers are up and running, startup is complete.
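Finally, to make the Acceptor --> Poller --> worker handoff concrete, here is a minimal, self-contained skeleton of the same architecture in plain NIO (illustrative names only, not Tomcat code):
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative skeleton of the acceptor/poller/worker split.
public class MiniServer {
    static final ExecutorService workers = Executors.newFixedThreadPool(8);

    public static void main(String[] args) throws IOException {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(true);              // acceptor blocks, like Tomcat NIO

        Poller poller = new Poller();
        new Thread(poller, "mini-ClientPoller-0").start();

        while (true) {                               // acceptor loop
            SocketChannel socket = server.accept();
            socket.configureBlocking(false);
            poller.register(socket);                 // hand off via the event queue
        }
    }

    static class Poller implements Runnable {
        final Selector selector = Selector.open();
        final ConcurrentLinkedQueue<Runnable> events = new ConcurrentLinkedQueue<>();

        Poller() throws IOException {}

        void register(SocketChannel socket) {
            events.offer(() -> {
                try {
                    socket.register(selector, SelectionKey.OP_READ);
                } catch (IOException e) { /* channel closed while registering */ }
            });
            selector.wakeup();
        }

        @Override
        public void run() {
            try {
                while (true) {
                    Runnable r;
                    while ((r = events.poll()) != null) r.run();  // registrations happen here
                    selector.select(1000);
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (!key.isValid()) continue;
                        key.interestOps(0);          // stop selecting it, like Tomcat's unreg()
                        SocketChannel ch = (SocketChannel) key.channel();
                        workers.execute(() -> handle(ch));  // the SocketProcessor equivalent
                    }
                }
            } catch (IOException e) {
                // selector failure: terminate the poller
            }
        }
    }

    static void handle(SocketChannel ch) {
        try {
            ByteBuffer buf = ByteBuffer.allocate(4096);
            if (ch.read(buf) < 0) ch.close();
            // ... parse request, write response ...
        } catch (IOException e) {
            try { ch.close(); } catch (IOException ignore) {}
        }
    }
}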