This library is genuinely nice to use and not all that difficult, so I figured I'd write about it. Then the site went down for the past couple of days and held things up for quite a while; hopefully I can finish tonight…
I've also been reading this code for work these past few days, so it should go fairly quickly.
This time we start from Put with a content provider.
First, a look at the official usage:
Send content with Content provider
const uint64_t DATA_CHUNK_SIZE = 4;

svr.Get("/stream", [&](const Request &req, Response &res) {
  auto data = new std::string("abcdefg");

  res.set_content_provider(
      data->size(), // Content length
      [data](uint64_t offset, uint64_t length, DataSink &sink) {
        const auto &d = *data;
        sink.write(&d[offset], std::min(length, DATA_CHUNK_SIZE));
      },
      [data] { delete data; });
});
The lambda here is a ContentProvider; a look at its definition shows it is just a std::function:
using ContentProvider =
std::function<void(size_t offset, size_t length, DataSink &sink)>;
The usage is clear enough: the provider writes length bytes, starting at offset, into the sink. The sink is roughly a data-stream concept, the receiving end of the stream. Honestly, though, this lambda is written in a rather roundabout way… data = new std::string is only there to make it easy to demonstrate how the second lambda releases the object. Each call writes min(length, DATA_CHUNK_SIZE) bytes from offset, i.e. at most DATA_CHUNK_SIZE per call; offset and length get updated by the code we will look at further down. To make the offset/length contract concrete first, here is a minimal standalone sketch; FakeSink and the driver loop are my own stand-ins for illustration, not library code:
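#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <string>

// Stand-in for httplib's DataSink: write-only, backed by a string.
struct FakeSink {
  std::string buffer;
  void write(const char *d, size_t l) { buffer.append(d, l); }
};

int main() {
  const size_t DATA_CHUNK_SIZE = 4;
  std::string data = "abcdefg";

  // Same shape as ContentProvider: copy at most DATA_CHUNK_SIZE
  // bytes, starting at offset, into the sink.
  auto provider = [&](size_t offset, size_t length, FakeSink &sink) {
    sink.write(&data[offset], std::min(length, DATA_CHUNK_SIZE));
  };

  // The caller owns the loop and refreshes offset/length each round,
  // exactly like write_request does further down.
  FakeSink sink;
  size_t offset = 0;
  const size_t end_offset = data.size();
  while (offset < end_offset) {
    const size_t before = sink.buffer.size();
    provider(offset, end_offset - offset, sink);
    offset += sink.buffer.size() - before; // advance by bytes written
  }
  std::printf("%s\n", sink.buffer.c_str()); // prints: abcdefg
}
Back in the library: the Server-side code goes through the thread pool and isn't very direct, but the Client has a similar usage that is easier to follow: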
std::shared_ptr<Response>
Client::Put(const char *path, const Headers &headers, size_t content_length,
ContentProvider content_provider, const char *content_type) {
return send_with_content_provider("PUT", path, headers, std::string(),
content_length, content_provider,
content_type);
}
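A call to this overload might look like the following sketch; the host, path, and payload are made up for illustration:
httplib::Client cli("localhost", 8080);
std::string payload = "abcdefg";
auto res = cli.Put(
    "/upload", httplib::Headers(), payload.size(),
    [&](size_t offset, size_t length, httplib::DataSink &sink) {
      // Push the requested window straight into the sink.
      sink.write(payload.data() + offset, length);
    },
    "text/plain");
if (res && res->status == 200) { /* uploaded */ }
Put itself just forwards everything to send_with_content_provider: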
std::shared_ptr<Response> Client::send_with_content_provider(
const char *method, const char *path, const Headers &headers,
const std::string &body, size_t content_length,
ContentProvider content_provider, const char *content_type) {
Request req;
req.method = method;
req.headers = headers;
req.path = path;
req.headers.emplace("Content-Type", content_type);
{
if (content_provider) {
req.content_length = content_length;
req.content_provider = content_provider;
} else {
req.body = body;
}
}
auto res = std::make_shared<Response>();
return send(req, *res) ? res : nullptr;
}
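Which branch runs depends on the overload used. As far as I can tell this version also offers plain-body overloads, so take the first signature below as an assumption:
// Plain body: content_provider is empty, so req.body gets set.
cli.Put("/upload", "abcdefg", "text/plain");

// Content provider: req.content_length and req.content_provider
// are set instead, and req.body stays empty.
cli.Put("/upload", httplib::Headers(), 7, provider, "text/plain");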
The provider has now been stored on req, and what follows is the actual send:
bool Client::send(const Request &req, Response &res) {
auto sock = create_client_socket();
if (sock == INVALID_SOCKET) { return false; }
#ifdef CPPHTTPLIB_OPENSSL_SUPPORT
if (is_ssl() && !proxy_host_.empty()) {
bool error;
if (!connect(sock, res, error)) { return error; }
}
#endif
return process_and_close_socket(
sock, 1, [&](Stream &strm, bool last_connection, bool &connection_close) {
return handle_request(strm, req, res, last_connection,
connection_close);
});
}
bool Client::handle_request(Stream &strm, const Request &req,
Response &res, bool last_connection,
bool &connection_close) {
if (req.path.empty()) { return false; }
bool ret;
if (!is_ssl() && !proxy_host_.empty()) {
auto req2 = req;
req2.path = "http://" + host_and_port_ + req.path;
ret = process_request(strm, req2, res, last_connection, connection_close);
} else {
ret = process_request(strm, req, res, last_connection, connection_close);
}
if (!ret) { return false; }
if (300 < res.status && res.status < 400 && follow_location_) {
ret = redirect(req, res);
}
return ret;
}
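One detail worth noting: when talking to a plain-HTTP proxy, the request target must be in absolute form, which is why the path is rewritten to "http://" + host_and_port_ + req.path. For the hypothetical PUT above, the request line would become:
PUT http://localhost:8080/upload HTTP/1.1
Either way, the real work happens in process_request: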
bool Client::process_request(Stream &strm, const Request &req,
Response &res, bool last_connection,
bool &connection_close) {
// Send request
if (!write_request(strm, req, last_connection)) { return false; }
// Receive response and headers
if (!read_response_line(strm, res) ||
!detail::read_headers(strm, res.headers)) {
return false;
}
if (res.get_header_value("Connection") == "close" ||
res.version == "HTTP/1.0") {
connection_close = true;
}
if (req.response_handler) {
if (!req.response_handler(res)) { return false; }
}
// Body
if (req.method != "HEAD" && req.method != "CONNECT") {
ContentReceiver out = [&](const char *buf, size_t n) {
if (res.body.size() + n > res.body.max_size()) { return false; }
res.body.append(buf, n);
return true;
};
if (req.content_receiver) {
out = [&](const char *buf, size_t n) {
return req.content_receiver(buf, n);
};
}
int dummy_status;
if (!detail::read_content(strm, res, (std::numeric_limits<size_t>::max)(),
dummy_status, req.progress, out)) {
return false;
}
}
// Log
if (logger_) { logger_(req, res); }
return true;
}
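Note the body handling: the default ContentReceiver appends everything into res.body, and supplying req.content_receiver swaps that out, which is how a streamed download avoids buffering the whole body in memory. A sketch, assuming the Get overload that accepts a receiver (it exists in this era of the library, but double-check the exact signature):
std::string body;
auto res = cli.Get("/stream", [&](const char *data, size_t data_length) {
  body.append(data, data_length);
  return true; // returning false aborts the download
});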
And the last layer of the call chain:
bool Client::write_request(Stream &strm, const Request &req,
bool last_connection) {
detail::BufferStream bstrm;
// Request line
const auto &path = detail::encode_url(req.path);
bstrm.write_format("%s %s HTTP/1.1\r\n", req.method.c_str(), path.c_str());
// Additional headers
Headers headers;
if (last_connection) { headers.emplace("Connection", "close"); }
if (!req.has_header("Host")) {
if (is_ssl()) {
if (port_ == 443) {
headers.emplace("Host", host_);
} else {
headers.emplace("Host", host_and_port_);
}
} else {
if (port_ == 80) {
headers.emplace("Host", host_);
} else {
headers.emplace("Host", host_and_port_);
}
}
}
if (!req.has_header("Accept")) { headers.emplace("Accept", "*/*"); }
if (!req.has_header("User-Agent")) {
headers.emplace("User-Agent", "cpp-httplib/0.5");
}
if (req.body.empty()) {
if (req.content_provider) {
auto length = std::to_string(req.content_length);
headers.emplace("Content-Length", length);
} else {
headers.emplace("Content-Length", "0");
}
} else {
if (!req.has_header("Content-Type")) {
headers.emplace("Content-Type", "text/plain");
}
if (!req.has_header("Content-Length")) {
auto length = std::to_string(req.body.size());
headers.emplace("Content-Length", length);
}
}
if (!basic_auth_username_.empty() && !basic_auth_password_.empty()) {
headers.insert(make_basic_authentication_header(
basic_auth_username_, basic_auth_password_, false));
}
if (!proxy_basic_auth_username_.empty() &&
!proxy_basic_auth_password_.empty()) {
headers.insert(make_basic_authentication_header(
proxy_basic_auth_username_, proxy_basic_auth_password_, true));
}
detail::write_headers(bstrm, req, headers);
// Flush buffer
auto &data = bstrm.get_buffer();
strm.write(data.data(), data.size());
// Body
if (req.body.empty()) {
if (req.content_provider) {
size_t offset = 0;
size_t end_offset = req.content_length;
DataSink data_sink;
data_sink.write = [&](const char *d, size_t l) {
auto written_length = strm.write(d, l);
offset += static_cast<size_t>(written_length);
};
data_sink.is_writable = [&](void) { return strm.is_writable(); };
while (offset < end_offset) {
req.content_provider(offset, end_offset - offset, data_sink);
}
}
} else {
strm.write(req.body);
}
return true;
}
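To make the header logic concrete: for the hypothetical PUT above (last_connection is true here, since send only issues a single request), the head flushed before the body would look roughly like this, though the exact header order may differ:
PUT /upload HTTP/1.1
Connection: close
Host: localhost:8080
Accept: */*
User-Agent: cpp-httplib/0.5
Content-Type: text/plain
Content-Length: 7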
Here a DataSink is constructed whose write pushes data straight into the socket stream and then advances offset. The while loop then recomputes length as end_offset - offset on every call (offset moved, so length shrinks with it…).
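Concretely, if this loop were driving the README provider (DATA_CHUNK_SIZE = 4, data = "abcdefg", content_length = 7), it would take two rounds:
provider(0, 7, sink); // writes "abcd" (min(7, 4) = 4), offset -> 4
provider(4, 3, sink); // writes "efg"  (min(3, 4) = 3), offset -> 7, loop exits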
And that completes the walk-through of using a content provider…
One thing bugged me here: why not just expose the Stream directly instead of wrapping it in a DataSink? Wouldn't Stream itself make a cleaner interface, given that DataSink's write just calls into the Stream anyway? Thinking about it some more: DataSink only exposes writing, while Stream is a read-write interface, so the wrapper acts as a seal, which is safer. The only real hurdle is how to think about DataSink, and the trick is to think of it like ostream in iostreams: a one-way, write-only channel.
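Reconstructed from the usage in write_request, DataSink in this version is essentially a bundle of std::function members; write and is_writable are visible in the code above, and anything beyond that (the library also appears to have a done callback for chunked transfers) should be treated as an assumption:
struct DataSink {
  std::function<void(const char *data, size_t data_len)> write;
  std::function<bool()> is_writable;
};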
If I run into this at work again I'll keep writing. For now I plan to start reading the Qt source; after all, drawing a blank on every interview question isn't a good look…