There are many CDN providers on the market. Common ones include:
Akamai (the world's largest)
webluker
cloudflare
ChinaCache (蓝汛)
网宿CDN (ChinaNetCenter)
帝联CDN (Dnion)
Alibaba CDN (its web cache server is called Swift)
Tencent CDN
Cache control headers:
By spec, Cache-Control takes precedence over Expires. In practice you can send both, or just Cache-Control alone. Generally, Expires = current system time (Date) + cache lifetime (Cache-Control: max-age).
When re-requesting a page the browser has already seen, the request headers change:
1. The If-Modified-Since request header carries the Last-Modified value from the previous response; the browser sends this timestamp to the server to check whether the content has changed.
2. The If-None-Match request header carries the ETag value from the previous response; the browser sends this value to the server to check whether the content has changed.
When Last-Modified and ETag are used together, the browser sends both If-Modified-Since and If-None-Match on validation. Per the HTTP/1.1 spec, if both are present the server must validate both before returning 304, and nginx behaves exactly this way. Choose which to use based on your actual situation.
When we force-refresh (Ctrl-F5 / Cmd-Shift-R):
The browser sends the request without If-Modified-Since, and adds Cache-Control: no-cache and Pragma: no-cache, telling the server "please give me a fresh copy of the content".
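The server-side validation rule described above (both validators must pass before a 304 may be sent) can be sketched in Python; the function and variable names are illustrative, not any real library's API:

```python
from email.utils import parsedate_to_datetime, format_datetime
from datetime import datetime, timezone

def should_send_304(resource_last_modified, resource_etag,
                    if_modified_since=None, if_none_match=None):
    """Return True if the server may answer 304 Not Modified.

    When both validators are present, 304 is only allowed if both
    match -- which is also how nginx behaves.
    """
    if if_none_match is not None:
        if if_none_match != resource_etag:
            return False          # ETag mismatch -> full response required
    if if_modified_since is not None:
        ims = parsedate_to_datetime(if_modified_since)
        if resource_last_modified > ims:
            return False          # modified since the client's copy
    # At least one validator must have been supplied at all.
    return if_none_match is not None or if_modified_since is not None

last_mod = datetime(2024, 1, 1, tzinfo=timezone.utc)
etag = '"abc123"'

# Both validators match -> 304 is allowed
print(should_send_304(last_mod, etag,
                      if_modified_since=format_datetime(last_mod, usegmt=True),
                      if_none_match='"abc123"'))   # True
# ETag differs -> full 200 response, even though the date still matches
print(should_send_304(last_mod, etag,
                      if_modified_since=format_datetime(last_mod, usegmt=True),
                      if_none_match='"zzz"'))      # False
```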
A web application is split into a front end and a back end. The current trend is an ever-growing variety of front-end devices (phones, tablets, desktop computers, other specialized devices...).
There must therefore be a unified mechanism for all these front-end devices to communicate with the back end.
This has driven the popularity of API-centric architectures, and even the "API First" design philosophy.
RESTful APIs are currently the most mature body of theory for designing Internet application APIs.
I previously wrote an article, "Understanding RESTful Architecture" (理解RESTful架构), discussing how to understand the concept.
Today I will cover the design details of RESTful APIs: how to design a sensible, usable API. My main references are two articles (1, 2).
For communication between the API and its users, always use the HTTPS protocol.
The API should, as far as possible, be deployed under a dedicated domain:
https://api.example.com
If the API is known to be simple and will not grow further, it can live under the main domain:
https://example.org/api/
The API version number should go into the URL:
https://api.example.com/v1/
An alternative is to put the version number in an HTTP header, but that is less convenient and less visible than the URL. GitHub takes this approach.
The path, also called the "endpoint", is the concrete address of the API.
In a RESTful architecture, every URL represents a resource, so URLs may contain only nouns, no verbs, and the nouns used usually correspond to database table names. Since a database table is generally a "collection" of records of the same kind, nouns in the API should also be plural.
For example, for an API serving information about a zoo, including its animals and employees, the paths should be designed like this:
- https://api.example.com/v1/zoos
- https://api.example.com/v1/animals
- https://api.example.com/v1/employees
The concrete operation on a resource is expressed by the HTTP verb.
The five commonly used HTTP verbs are (with the corresponding SQL command in parentheses):
- GET (SELECT): retrieve one or more resources from the server.
- POST (CREATE): create a new resource on the server.
- PUT (UPDATE): update a resource on the server (the client supplies the complete resource after the change).
- PATCH (UPDATE): update a resource on the server (the client supplies only the changed attributes).
- DELETE (DELETE): delete a resource from the server.
There are also two less common verbs:
- HEAD: retrieve a resource's metadata.
- OPTIONS: find out which properties of a resource the client is allowed to change.
Some examples:
- GET /zoos: list all zoos
- POST /zoos: create a new zoo
- GET /zoos/ID: get the information for a specific zoo
- PUT /zoos/ID: update a specific zoo (supplying its complete information)
- PATCH /zoos/ID: update a specific zoo (supplying partial information)
- DELETE /zoos/ID: delete a specific zoo
- GET /zoos/ID/animals: list all animals of a specific zoo
- DELETE /zoos/ID/animals/ID: delete a specific animal of a specific zoo
If there are many records, the server cannot return all of them to the user. The API should provide parameters for filtering the results.
Some common parameters:
- ?limit=10: number of records to return
- ?offset=10: starting position of the returned records
- ?page=2&per_page=100: page number, and records per page
- ?sortby=name&order=asc: attribute to sort by, and sort order
- ?animal_type_id=1: a filter condition
Some redundancy in parameter design is acceptable, i.e. an API path and a URL parameter may occasionally overlap. For example, GET /zoo/ID/animals and GET /animals?zoo_id=ID mean the same thing.
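Building such parameterized URLs is mechanical; a minimal Python sketch using the standard library (the base URL and parameter names simply mirror the examples above):

```python
from urllib.parse import urlencode

def build_list_url(base, **params):
    """Append filtering/pagination parameters to a collection endpoint."""
    return base + "?" + urlencode(params) if params else base

url = build_list_url("https://api.example.com/v1/animals",
                     zoo_id=1, limit=10, sortby="name", order="asc")
print(url)
# https://api.example.com/v1/animals?zoo_id=1&limit=10&sortby=name&order=asc
```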
Common status codes and messages returned by the server (the HTTP verbs a code applies to are in brackets):
- 200 OK - [GET]: the server successfully returned the requested data; the operation is idempotent.
- 201 CREATED - [POST/PUT/PATCH]: data was successfully created or modified.
- 202 Accepted - [*]: the request has been queued for background (asynchronous) processing.
- 204 NO CONTENT - [DELETE]: data was successfully deleted.
- 400 INVALID REQUEST - [POST/PUT/PATCH]: the request was malformed; the server did not create or modify any data; the operation is idempotent.
- 401 Unauthorized - [*]: the user is not authenticated (wrong token, username, or password).
- 403 Forbidden - [*]: the user is authenticated (unlike 401), but access is forbidden.
- 404 NOT FOUND - [*]: the request targets a record that does not exist; the server took no action; the operation is idempotent.
- 406 Not Acceptable - [GET]: the requested format is not available (e.g. the user requested JSON but only XML exists).
- 410 Gone - [GET]: the requested resource has been permanently deleted and will not be available again.
- 422 Unprocessable Entity - [POST/PUT/PATCH]: a validation error occurred while creating an object.
- 500 INTERNAL SERVER ERROR - [*]: a server error occurred; the user cannot tell whether the request succeeded.
A complete list of status codes can be found here.
If the status code is 4xx, an error message should be returned to the user. Generally, using error as the key and the message as the value is sufficient:
{ "error": "Invalid API key" }
For each kind of operation, the result returned by the server should follow these conventions:
- GET /collection: return a list (array) of resource objects
- GET /collection/resource: return a single resource object
- POST /collection: return the newly created resource object
- PUT /collection/resource: return the complete resource object
- PATCH /collection/resource: return the complete resource object
- DELETE /collection/resource: return an empty document
Ideally a RESTful API is hypermedia-driven: results contain links to other API methods, so users know what to do next without consulting the documentation.
For example, a request to the root of api.example.com returns a document like this:
{ "link": { "rel": "collection https://www.example.com/zoos", "href": "https://api.example.com/zoos", "title": "List of zoos", "type": "application/vnd.yourformat+json" } }
This document has a link property; by reading it the user knows which API to call next. rel describes the relation between that API and the current URL (a collection relation, giving the collection's URL), href is the API path, title is the API's title, and type is the returned media type.
This style of hypermedia API design is known as HATEOAS. GitHub's API is designed this way: a request to api.github.com returns a list of the URLs of all available APIs:
{ "current_user_url": "https://api.github.com/user", "authorizations_url": "https://api.github.com/authorizations", // ... }
From this we can see that to get the current user's information we should visit api.github.com/user, which returns:
{ "message": "Requires authentication", "documentation_url": "https://developer.github.com/v3" }
That is, the server returned a hint message together with the documentation URL.
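The point of the link property can be shown with a few lines of Python; the root document below is the hypothetical example from above, and next_endpoint is an illustrative helper, not a standard function:

```python
import json

# The hypothetical root document described above (names are illustrative).
root_doc = json.loads('''
{ "link": { "rel": "collection https://www.example.com/zoos",
            "href": "https://api.example.com/zoos",
            "title": "List of zoos",
            "type": "application/vnd.yourformat+json" } }
''')

def next_endpoint(doc):
    """Discover the next URL to call from the hypermedia link -- no docs needed."""
    return doc["link"]["href"]

print(next_endpoint(root_doc))  # https://api.example.com/zoos
```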
(1) API authentication should use the OAuth 2.0 framework.
(2) Data returned by the server should be JSON wherever possible; avoid XML.
(End)
Source: http://www.ruanyifeng.com/blog/2014/05/restful_api.html
References:
Understanding RESTful Architecture (理解RESTful架构) http://www.ruanyifeng.com/blog/2011/09/restful.html
Principles of Good RESTful API Design (好RESTful API的设计原则) https://www.cnblogs.com/moonz-wu/p/4211626.html
http://www.informit.com/articles/article.aspx?p=1566460
A few days ago I enabled HTTPS and SSL/TLS on this blog. A big barrier to enabling SSL on your website is the cost of the SSL certificate and the maintenance overhead of having to constantly renew your certificate. You could already get free SSL certificates with StartSSL, but the process of obtaining the certificate is still a manual process. A few months ago Mozilla and a bunch of companies came together and created Letsencrypt, a service which issues free SSL certificates that are automatically generated with a command line tool. When set up correctly, it alleviates the need for manual intervention. As of the writing of this blog post, the service is still in beta and support for Nginx is minimal, but it’s not difficult to set up.
Since the software is still in beta, the only way to get it is via letsencrypt github. First we need to pull the repository:
Update 15/05/2016 – previously named letsencrypt-auto, the certificate utility is now called certbot-auto
We then get the certbot-auto executable. Running ./certbot-auto --help will list the available commands. The current version has built-in support for Apache, with nginx support still in testing. But as long as we can obtain the certificate, we can install it into any software, supported or not. I will illustrate the (initially) manual way of getting the certificate with nginx.
To obtain a certificate, ownership of the domain needs to be verified. Letsencrypt achieves this by checking for the contents of a file under http://www.example.com/.well-known/acme-challenge/. There are two ways given by certbot-auto to make this file available: by spawning a standalone server to listen on port 80 (--standalone), or by adding the files to the root folder of the web site (--webroot). The problem with --standalone is that you have to stop your active web server on port 80, causing downtime, whereas --webroot allows your current server to continue operating. So --webroot is the solution we will use.
Obtain a certificate with webroot by calling the command below (replacing example.com and /var/www/example, obviously!). You can append more domains with -d.
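A plausible form of that command (domain and webroot path are placeholders):

```shell
# Hedged sketch: replace the domain and webroot with your own values.
./certbot-auto certonly --webroot -w /var/www/example \
    -d example.com -d www.example.com
```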
The above method requires you to have a physical root folder. If you are using nginx as a load balancer or reverse proxy (i.e. proxy_pass), you most likely won't have a root for your domain. In those cases, you could add a location alias to your nginx.conf under the HTTP (port 80) server directive for the domain:
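For example, something like the following (the alias path is illustrative):

```nginx
server {
    listen 80;
    server_name example.com;

    # Serve the ACME challenge files from a fixed directory.
    location /.well-known/acme-challenge/ {
        alias /var/www/letsencrypt/.well-known/acme-challenge/;
    }
}
```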
Then run:
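Presumably the same certonly invocation, with -w pointing at the alias directory (paths are illustrative):

```shell
# Hedged sketch: -w must match the alias directory configured in nginx.
./certbot-auto certonly --webroot -w /var/www/letsencrypt -d example.com
```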
The first time you run certbot, it will ask you to agree to the TOS and register your email. After that, the operation is pretty much unattended.
After the command finishes, you should have your certificate at /etc/letsencrypt/live/example.com/fullchain.pem and private key at /etc/letsencrypt/live/example.com/privkey.pem
Modify your nginx configuration file to enable SSL:
You may also want to add Strict-Transport-Security (HSTS) to your config, so that any internal links that are not HTTPS are automatically routed to the HTTPS version during an HTTPS session.
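A minimal sketch of such a server block, assuming the default Let's Encrypt certificate paths (the max-age value is illustrative):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # HSTS: browsers will use HTTPS for all subsequent requests to this host.
    add_header Strict-Transport-Security "max-age=31536000" always;
}
```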
Lastly, reload the nginx configuration:
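On most systems one of the following does it:

```shell
# Either signal the running master process...
nginx -s reload
# ...or use the init script / service manager.
service nginx reload
```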
Let’s Encrypt certificates are only valid for 3 months after issue, so renewal is required every 3 months. Since the certificate is obtained via the command line, this process can be automated. We could set up a cron job which takes care of the renewal, like the one below:
The above cron job is set up to run daily, but the certificate is only renewed if less than 30 days to expiry.
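A sketch of such a cron entry (the schedule, certbot path, and reload command are illustrative; `renew` itself skips certificates with more than 30 days to expiry):

```shell
# /etc/cron.d/certbot-renew -- attempt renewal daily at 02:30
30 2 * * * root /opt/certbot/certbot-auto renew --quiet && nginx -s reload
```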
links: https://loune.net/2016/01/https-with-lets-encrypt-ssl-and-nginx/
OpenSSL is an open-source implementation of the SSL and TLS protocols. It provides an encrypted transport layer on top of the standard communication layer, allowing it to be combined with many network applications and services. The default SSL profile in the Cloud Management Console has a generic common name. When associating an SSL profile with a gateway cluster, if the default SSL profile is used, applications making API calls may be unable to validate the host name they connect to against the certificate presented. In that case you can generate a new self-signed certificate presenting a common name that the applications can validate. This topic shows how to use the OpenSSL toolkit to generate a self-signed SSL certificate to enable HTTPS connections.
To generate a self-signed SSL certificate with OpenSSL, complete the following steps:
This article is excerpted from: http://www.williamsang.com/archives/1645.html
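The steps typically look like the following; the common name and file names are illustrative:

```shell
# 1. Generate a 2048-bit RSA private key.
openssl genrsa -out server.key 2048
# 2. Create a certificate signing request (CSR) for the desired common name.
openssl req -new -key server.key -out server.csr -subj "/CN=api.example.com"
# 3. Self-sign the CSR with the same key, valid for 365 days.
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
```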
Quick install
Run yum install squid
Copy the configuration file at the bottom over the default squid.conf
Generate the password file (see the Password login section below)
Start squid: service squid start
Configuration file walkthrough
#################################
### acl and http_access control ###
#################################
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
# The lines below define localnet as the LAN address ranges
# (see the private address ranges in networking)
acl localnet src 10.0.0.0/8
acl localnet src 172.16.0.0/12
acl localnet src 192.168.0.0/16
acl localnet src fc00::/7
acl localnet src fe80::/10
# Define SSL_ports as 443
acl SSL_ports port 443
# Define the ports regarded as Safe_ports
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
# CONNECT stands for the HTTP CONNECT request method
acl CONNECT method CONNECT
# Allow cache management from the local machine
http_access allow manager localhost
# Deny cache management from anywhere else
http_access deny manager
# Deny requests to unsafe ports
http_access deny !Safe_ports
# Deny CONNECT to ports other than SSL_ports
http_access deny CONNECT !SSL_ports
# Deny requests to services served by this host itself
# (protects local-only services on this machine)
# http_access deny to_localhost
# Allow requests from the local network
http_access allow localnet
# Allow requests from the local machine
http_access allow localhost
# Deny everything else
http_access deny all
#################################
### basic squid server settings ###
#################################
# Port squid listens on
http_port 3128
#################################
### squid cache settings ###
#################################
# Do not cache URLs containing cgi-bin or "?"
hierarchy_stoplist cgi-bin ?
# On-disk cache directory
#cache_dir ufs /var/spool/squid 100 16 256
# Where squid leaves its "last words" (core dump) when it crashes
coredump_dir /var/spool/squid
# Cache refresh rules
refresh_pattern ^ftp:           1440  20%  10080
refresh_pattern ^gopher:        1440   0%   1440
refresh_pattern -i (/cgi-bin/|\?)  0   0%      0
refresh_pattern .                  0  20%   4320
acl control format:
acl <list-name> <list-type> <value1> <value2> ...
list-name: chosen by you, used in the access rules (allow/deny) below.
list-type:
http_access deny/allow <acl-list-name>
Note that http_access filters requests in the order the rules are defined: once a request matches an allow/deny rule, the rules below it are no longer evaluated.
For example:
acl manager proto cache_object
http_access allow manager localhost
http_access deny manager
The first line defines manager as cache management; the two lines after it restrict cache management to the local machine only.
Squid server settings
http_port 3128
Sets the port squid listens on; you can change it, and clients must specify this port when connecting.
Cache settings:
cache_dir ufs /var/spool/squid 100 16 256
Sets the disk cache directory, its size, its directory structure, and so on; the format is:
cache_dir schema directory size L1 L2 [option]
coredump_dir /var/spool/squid
Where squid leaves its core dump when it crashes; usually nothing to change here.
hierarchy_stoplist
Followed by a list of strings; any request whose URL contains one of them is not cached.
hierarchy_stoplist cgi-bin ?
The line above means: any request whose URL contains "?" or "cgi-bin" is not cached; the origin server's response is returned to the user directly.
refresh_pattern
Uses a regular expression to determine resource expiry times.
refresh_pattern <regex> <min-time> <percent> <max-time>
How is expiry determined?
A simple description of squid's refresh_pattern algorithm:
If the response's age exceeds the refresh_pattern max value, the response is stale;
if the LM-factor is below the refresh_pattern percentage, the response is fresh;
if the response's age is below the refresh_pattern min value, the response is fresh;
otherwise, the response is stale.
The LM-factor calculation:
resource age = time the object entered the cache - the object's last_modified
response age = current time - time the object entered the cache
LM-factor = (response age) / (resource age)
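The decision rule above can be sketched in Python; this is an illustrative model of the algorithm as described here (all times in plain seconds, whereas squid's configuration uses minutes), not squid's actual source:

```python
def is_fresh(now, entered_cache, last_modified, min_age, percent, max_age):
    """Squid-style refresh_pattern decision, per the description above.

    Returns True if the cached response is still fresh.
    """
    resource_age = entered_cache - last_modified  # object's age when it was cached
    response_age = now - entered_cache            # time it has sat in the cache
    if response_age > max_age:
        return False                              # older than max -> stale
    if resource_age > 0 and (response_age / resource_age) < (percent / 100.0):
        return True                               # LM-factor below threshold -> fresh
    if response_age < min_age:
        return True                               # younger than min -> fresh
    return False                                  # otherwise -> stale

# An object cached at t=900 that was last modified at t=0, checked at t=1000,
# under a rule like "refresh_pattern . 0 20% 4320":
print(is_fresh(1000, 900, 0, min_age=0, percent=20, max_age=4320))  # True
```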
A few extra options worth covering:
Setting DNS servers
Logging
Logs go under /var/log/squid by default, and log rotation is configured at install time; see /etc/logrotate.d/squid. If you move the logs elsewhere, it is best to reconfigure rotation; on logrotate, see: logrotate使用 (logrotate usage)
The user and group squid's process runs as; both default to squid
Memory cache size: how much extra memory to give squid.
cache_mem 20M
How total memory use is calculated:
squid's own process (10-20 MB) + the in-memory index of the cache directory (roughly 20 MB of index per 500 MB of cache) + cache_mem
The official recommendation is that available memory be at least twice this total, so cache_mem should not be set too large. With plenty of RAM, of course, bigger is better.
The in-memory index figure (about 20 MB per 500 MB of cache) is an estimate; it cannot be computed exactly. On 64-bit systems each cache index entry takes about 112 bytes, so if cached files are 1 KB-20 KB each, the index costs roughly 3-50 MB. Looking at the cached files under /var/spool/squid you will find most are a few KB, so 20 MB is a reasonable estimate.
Password login
Proxies are normally password-protected; otherwise anyone could use them and the server would soon fall over. How do we set up password authentication in squid?
First set the authentication parameters:
# Password file path; read via the ncsa_auth helper program
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/htpasswd
auth_param basic children 5
# Greeting shown to the user when authenticating; optional
auth_param basic realm "Welcome to proxy web server"
# How long a successful authentication remains valid
auth_param basic credentialsttl 12 hours
auth_param basic casesensitive off
# Define an acl for authenticated users
acl squid_user proxy_auth REQUIRED
# Allow authenticated users through
http_access allow squid_user
Generate the username/password file; here we use william as the username and test1234 as the password:
touch /etc/squid/htpasswd
htpasswd /etc/squid/htpasswd william
The password file is best kept under /etc/squid/; elsewhere you easily run into permission problems that stop squid from starting.
Note: the htpasswd command ships with the Apache server; install it first (on CentOS/RHEL, e.g. yum install httpd, or httpd-tools for just the htpasswd utility).
The ngx_http_core_module module supports embedded variables with names matching the Apache Server variables. First of all, these are variables representing client request header fields, such as $http_user_agent, $http_cookie, and so on. Also there are other variables:
- $arg_name — argument name in the request line
- $args — arguments in the request line
- $binary_remote_addr — client address in a binary form
- $body_bytes_sent — number of bytes sent to a client, not counting the response header; this variable is compatible with the "%B" parameter of the mod_log_config Apache module
- $bytes_sent — number of bytes sent to a client
- $connection — connection serial number
- $connection_requests — current number of requests made through a connection
- $content_length — "Content-Length" request header field
- $content_type — "Content-Type" request header field
- $cookie_name — the name cookie
- $document_root — root or alias directive's value for the current request
- $document_uri — same as $uri
- $host — in this order of precedence: host name from the request line, or host name from the "Host" request header field, or the server name matching a request
- $hostname — host name
- $http_name — arbitrary request header field; the last part of the variable name is the field name converted to lower case with dashes replaced by underscores
- $https — "on" if connection operates in SSL mode, or an empty string otherwise
- $is_args — "?" if a request line has arguments, or an empty string otherwise
- $limit_rate — setting this variable enables response rate limiting
- $msec — current time in seconds with a milliseconds resolution
- $nginx_version — nginx version
- $pid — PID of the worker process
- $pipe — "p" if request was pipelined, "." otherwise (1.3.12, 1.2.7)
- $proxy_protocol_addr — client address from the PROXY protocol header. The PROXY protocol must be previously enabled by setting the proxy_protocol parameter in the listen directive.
- $query_string — same as $args
- $realpath_root — an absolute pathname corresponding to the root or alias directive's value for the current request, with all symbolic links resolved to real paths
- $remote_addr — client address
- $remote_port — client port
- $remote_user — user name supplied with the Basic authentication
- $request — full original request line
- $request_body — request body. The variable's value is made available in locations processed by the proxy_pass, fastcgi_pass, uwsgi_pass, and scgi_pass directives.
- $request_body_file — name of a temporary file with the request body. At the end of processing, the file needs to be removed. To always write the request body to a file, client_body_in_file_only needs to be enabled. When the name of a temporary file is passed in a proxied request or in a request to a FastCGI/uwsgi/SCGI server, passing the request body should be disabled by the proxy_pass_request_body off, fastcgi_pass_request_body off, uwsgi_pass_request_body off, or scgi_pass_request_body off directives, respectively.
- $request_completion — "OK" if a request has completed, or an empty string otherwise
- $request_filename — file path for the current request, based on the root or alias directives, and the request URI
- $request_length — request length (including request line, header, and request body)
- $request_method — request method, usually "GET" or "POST"
- $request_time — request processing time in seconds with a milliseconds resolution
- $request_uri — full original request URI (with arguments)
- $scheme — request scheme, "http" or "https"
- $sent_http_name — arbitrary response header field
- $server_addr — an address of the server which accepted a request. Computing a value of this variable usually requires one system call. To avoid a system call, the listen directives must specify addresses and use the bind parameter.
- $server_name — name of the server which accepted a request
- $server_port — port of the server which accepted a request
- $server_protocol — request protocol, usually "HTTP/1.0" or "HTTP/1.1"
- $status — response status
- $tcpinfo_rtt, $tcpinfo_rttvar, $tcpinfo_snd_cwnd, $tcpinfo_rcv_space — information about the client TCP connection; available on systems that support the TCP_INFO socket option
- $time_iso8601 — local time in the ISO 8601 standard format
- $time_local — local time in the Common Log Format
- $uri — current URI in request, normalized. The value of $uri may change during request processing, e.g. when doing internal redirects, or when using index files.
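As a quick illustration, several of these variables are commonly combined in a log_format directive; the format name and log path below are arbitrary:

```nginx
http {
    # Access log built from the embedded variables above.
    log_format timed '$remote_addr - $remote_user [$time_local] '
                     '"$request" $status $body_bytes_sent '
                     'rt=$request_time "$http_referer" "$http_user_agent"';

    server {
        listen 80;
        access_log /var/log/nginx/access.log timed;
    }
}
```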
# This is a basic VCL configuration file for varnish. See the vcl(7)
# man page for details on VCL syntax and semantics.
#
# Default backend definition. Set this to point to your content
# server.
#
probe healthcheck {
    .url = "/";
    .interval = 60s;
    .timeout = 0.3s;
    .window = 8;
    .threshold = 3;
    .initial = 3;
}
#backend local {
#    .host = "127.0.0.1";
#    .port = "80";
#}
backend web244 {
    .host = "192.168.2.244";
    .port = "80";
    .probe = healthcheck;
}
backend web243 {
    .host = "192.168.2.243";
    .port = "80";
    .probe = healthcheck;
}
backend web232 {
    .host = "192.168.2.232";
    .port = "80";
    .probe = healthcheck;
}
director default round-robin {
    { .backend = web244; }
    { .backend = web243; }
}
director image fallback {
    { .backend = web232; }
    { .backend = web243; }
}
#
# Below is a commented-out copy of the default VCL logic. If you
# redefine any of these subroutines, the built-in logic will be
# appended to your code.
sub vcl_recv {
    # if (req.restarts == 0) {}
    if (req.http.x-forwarded-for) {
        set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
    } else {
        set req.http.X-Forwarded-For = client.ip;
    }
    if (req.http.host == "static.aooshi.org" ||
        req.http.host == "image.aooshi.org") {
        set req.backend = image;
    }
    # if (req.request != "GET" &&
    #     req.request != "HEAD" &&
    #     req.request != "PUT" &&
    #     req.request != "POST" &&
    #     req.request != "TRACE" &&
    #     req.request != "OPTIONS" &&
    #     req.request != "DELETE") {
    #     /* Non-RFC2616 or CONNECT which is weird. */
    #     return (pipe);
    # }
    if (req.request != "GET" && req.request != "HEAD") {
        /* We only deal with GET and HEAD by default */
        return (pass);
    }
    # if (req.http.Authorization || req.http.Cookie) {
    if (req.http.Authorization) {
        # /* Not cacheable by default */
        return (pass);
    }
    return (lookup);
}
#
# sub vcl_pipe {
#     # Note that only the first request to the backend will have
#     # X-Forwarded-For set. If you use X-Forwarded-For and want to
#     # have it set for all requests, make sure to have:
#     # set bereq.http.connection = "close";
#     # here. It is not set by default as it might break some broken web
#     # applications, like IIS with NTLM authentication.
#     return (pipe);
# }
#
# sub vcl_pass {
#     return (pass);
# }
#
# sub vcl_hash {
#     hash_data(req.url);
#     if (req.http.host) {
#         hash_data(req.http.host);
#     } else {
#         hash_data(server.ip);
#     }
#     return (hash);
# }
#
sub vcl_hit {
    return (deliver);
}
#
# sub vcl_miss {
#     return (fetch);
# }
#
sub vcl_fetch {
    if (beresp.ttl <= 0s ||
        beresp.http.Set-Cookie ||
        beresp.http.Vary == "*" ||
        beresp.http.Pragma == "no-cache" ||
        beresp.http.Cache-Control == "no-cache" ||
        beresp.http.Cache-Control == "private" ||
        beresp.status != 200) {
        # /*
        #  * Mark as "Hit-For-Pass" for the next 2 minutes
        #  */
        # set beresp.ttl = 120 s;
        return (hit_for_pass);
    }
    return (deliver);
}
#
sub vcl_deliver {
    remove resp.http.X-Varnish;
    # remove resp.http.Via;
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT from " + server.hostname;
    }
    return (deliver);
}
#
# sub vcl_error {
#     set obj.http.Content-Type = "text/html; charset=utf-8";
#     set obj.http.Retry-After = "5";
#     synthetic {"
#         <?xml version="1.0" encoding="utf-8"?>
#         <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
#         "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
#         <html>
#         <head>
#         <title>"} + obj.status + " " + obj.response + {"</title>
#         </head>
#         <body>
#         <h1>Error "} + obj.status + " " + obj.response + {"</h1>
#         <p>"} + obj.response + {"</p>
#         <h3>Guru Meditation:</h3>
#         <p>XID: "} + req.xid + {"</p>
#         <hr>
#         <p>Varnish cache server</p>
#         </body>
#         </html>
#     "};
#     return (deliver);
# }
#
# sub vcl_init {
#     return (ok);
# }
#
# sub vcl_fini {
#     return (ok);
# }
Other notes on the request states:
(1) Receive: the entry state for request processing. VCL rules decide whether the request should Pass or Pipe, or proceed to Lookup (a local cache lookup).
(2) Lookup: the object is looked up in the hash table. If found, the request enters the Hit state; otherwise it enters the Miss state.
(3) Pass: the request goes to the backend, i.e. it enters the Fetch state.
(4) Fetch: the request is forwarded to the backend; the response is fetched and may be stored locally.
(5) Deliver: the fetched data is sent to the client, completing the request.
Example walkthrough:
#
# This is an example VCL file for Varnish.
#
# It does not do anything by default, delegating control to the
# builtin VCL. The builtin VCL is called when there is no explicit
# return statement.
#
# See the VCL chapters in the Users Guide at https://www.varnish-cache.org/docs/
# and http://varnish-cache.org/trac/wiki/VCLExamples for more examples.
# Marker to tell the VCL compiler that this VCL has been adapted to the
# new 4.0 format.
vcl 4.0;
import directors;
probe backend_healthcheck { # define the health check
.url = "/health.html";
.window = 5;
.threshold = 2;
.interval = 3s;
}
backend web1 { # define a backend host
.host = "static1.lnmmp.com";
.port = "80";
.probe = backend_healthcheck;
}
backend web2 {
.host = "static2.lnmmp.com";
.port = "80";
.probe = backend_healthcheck;
}
backend img1 {
.host = "img1.lnmmp.com";
.port = "80";
.probe = backend_healthcheck;
}
backend img2 {
.host = "img2.lnmmp.com";
.port = "80";
.probe = backend_healthcheck;
}
sub vcl_init { # build the backend groups, i.e. directors
new web_cluster = directors.random();
web_cluster.add_backend(web1);
web_cluster.add_backend(web2);
new img_cluster = directors.random();
img_cluster.add_backend(img1);
img_cluster.add_backend(img2);
}
acl purgers { # source IPs allowed to purge
"127.0.0.1";
"192.168.0.0"/24;
}
sub vcl_recv {
if (req.method == "GET" && req.http.cookie) { # also cache GET requests that carry a Cookie header
return(hash);
}
if (req.url ~ "test.html") { # never cache test.html
return(pass);
}
if (req.method == "PURGE") { # handle PURGE requests
if (!client.ip ~ purgers) {
return(synth(405,"Method not allowed"));
}
return(hash);
}
if (req.http.X-Forward-For) { # add an X-Forward-For header to requests sent to the backend
set req.http.X-Forward-For = req.http.X-Forward-For + "," + client.ip;
} else {
set req.http.X-Forward-For = client.ip;
}
if (req.http.host ~ "(?i)^(www.)?lnmmp.com$") { # route to different backend groups by requested domain
set req.http.host = "www.lnmmp.com";
set req.backend_hint = web_cluster.backend();
} elsif (req.http.host ~ "(?i)^images.lnmmp.com$") {
set req.backend_hint = img_cluster.backend();
}
return(hash);
}
sub vcl_hit { # handle PURGE requests
if (req.method == "PURGE") {
purge;
return(synth(200,"Purged"));
}
}
sub vcl_miss { # handle PURGE requests
if (req.method == "PURGE") {
purge;
return(synth(404,"Not in cache"));
}
}
sub vcl_pass { # handle PURGE requests
if (req.method == "PURGE") {
return(synth(502,"PURGE on a passed object"));
}
}
sub vcl_backend_response { # customize the cache TTL per file type
if (bereq.url ~ "\.(jpg|jpeg|gif|png)$") {
set beresp.ttl = 7200s;
}
if (bereq.url ~ "\.(html|css|js)$") {
set beresp.ttl = 1200s;
}
if (beresp.http.Set-Cookie) { # do not cache backend responses that set cookies; return them to the client directly
return(deliver);
}
}
sub vcl_deliver {
if (obj.hits > 0) { # add an X-Cache header showing whether the cache was hit
set resp.http.X-Cache = "HIT from " + server.ip;
} else {
set resp.http.X-Cache = "MISS";
}
}
This setup uses LVS-DR mode; the real servers run Windows Server 2008.
Special steps required:
1. Add a loopback adapter (Microsoft Loopback Adapter)
2. Set the loopback IP to the VIP, with netmask 255.255.255.255
3. Adjust the network interface settings
Do this from the command line: Start -> Run -> cmd
netsh interface ipv4 set interface "real NIC name" weakhostsend=enabled
netsh interface ipv4 set interface "real NIC name" weakhostreceive=enabled
netsh interface ipv4 set interface "loopback NIC name" weakhostreceive=enabled
netsh interface ipv4 set interface "loopback NIC name" weakhostsend=enabled
By default Windows 2008 enables the strong host model on its interfaces, which prevents forwarding packets across interfaces. That means a request arriving on one network adapter will not be handled via the loopback adapter, because it came in on a different adapter. To switch the adapters from strong host to weak host you must run the four commands above; otherwise TCP connections will sit stuck in the SYN_RECV state.
netsh interface ipv4 set interface [InterfaceNameOrIndex] weakhostsend=enabled|disabled
netsh interface ipv4 set interface [InterfaceNameOrIndex] weakhostreceive=enabled|disabled
netsh interface ipv6 set interface [InterfaceNameOrIndex] weakhostsend=enabled|disabled
netsh interface ipv6 set interface [InterfaceNameOrIndex] weakhostreceive=enabled|disabled
netsh interface ipv6 show interface
http://blog.loadbalancer.org/direct-server-return-on-windows-2008-using-loopback-adpter/
http://technet.microsoft.com/zh-cn/magazine/2007.09.cableguy.aspx