1. How Varnish Works
In today's mainstream web service architectures, caching plays an increasingly important role, and for the common browser-based client/server model a web cache is key to conserving server resources. Varnish, developed in recent years by longtime FreeBSD developer Poul-Henning Kamp, is an outstanding web cache server. Strictly speaking, Varnish is a high-performance reverse proxy, but its caching is so strong that enterprises mostly deploy it as a cache server. Because it sits in front of the web server, some companies already run it in production as a replacement for older Squid deployments, getting better cache performance at the same hardware cost, and Varnish is also one of the common choices for CDN cache nodes.
Varnish's main characteristics:
1. Cache storage: either memory or disk; if disk is used, SSDs in RAID 1 are recommended.
2. Log storage: logs are also kept in memory, in a fixed-size region used as a ring buffer.
3. Supports the use of virtual memory.
4. Precise time management, i.e., control over the time attributes (TTL) of cached objects.
5. State-engine architecture: different engines handle different stages of cache and proxy processing. A dedicated configuration language (VCL) lets you write control statements that decide where and how data is cached, similar to the hooks in netfilter, where passing packets are processed by specific rules at specific points.
6. Cache management: cache data is organized in a binary heap so that expired objects can be evicted promptly.
Source: http://blog.51cto.com/12205781/2049222
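The state-engine model in point 5 can be illustrated with a minimal VCL sketch; the `/admin` URL pattern here is a hypothetical example, not part of the setup below:

```
vcl 4.0;

sub vcl_recv {
    # Hook into the "recv" state, much like a netfilter hook on incoming packets:
    # dynamic admin pages bypass the cache, everything else proceeds to lookup.
    if (req.url ~ "^/admin") {
        return (pass);    # forward to the backend, do not cache
    }
    return (hash);        # continue to the cache-lookup engine
}
```

Each `sub vcl_*` block is one state in the engine, and the `return (...)` action decides which state the request moves to next.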
2. Keepalived + Haproxy + Varnish + LNMP in Practice
2.1 Architecture topology
2.2 Device list
| Name   | IP                               | Role           | Remark     |
|--------|----------------------------------|----------------|------------|
| Node01 | 192.168.1.233, VIP 192.168.1.100 | Haproxy master | Keepalived |
| Node02 | 192.168.1.111, VIP 192.168.1.100 | Haproxy backup | Keepalived |
| Node03 | 192.168.1.178                    | Varnish        | proxy      |
| Node04 | 192.168.1.242                    | Web            | LNMP       |
| Node05 | 192.168.1.128                    | Web            | LNMP       |
2.3 Service installation
2.3.1 Install the LNMP stack on Node04 and Node05
rpm -ivh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
yum -y install nginx mariadb-server php-fpm php-mysql
Tune the nginx configuration
vim /etc/nginx/nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 65535;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;

    sendfile on;                     # use the efficient sendfile() path for static files
    tcp_nopush on;                   # accumulate packets to a useful size before sending
    tcp_nodelay on;                  # disable Nagle's algorithm (no delayed small packets)
    keepalive_timeout 60;            # how long to hold idle keep-alive connections
    client_header_timeout 10;        # timeout for reading the request header
    client_body_timeout 10;          # timeout for reading the request body
    reset_timedout_connection on;    # close connections to unresponsive clients
    send_timeout 10;                 # timeout between two successive writes to the client;
                                     # if the client stays idle longer, nginx closes the connection
    #limit_conn_zone $binary_remote_addr zone=addr:5m;
    #limit_conn addr 100;
    gzip on;
    include /etc/nginx/conf.d/*.conf;
}
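Note that worker_connections 65535 only takes effect if each worker process's file-descriptor limit is at least as large. One way to guarantee that from within nginx itself is a worker_rlimit_nofile directive at the top level of nginx.conf (this line is not in the original config, it is a suggested addition):

```
worker_rlimit_nofile 65535;    # per-worker FD limit; should be >= worker_connections
```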
vim /etc/nginx/conf.d/default.conf
server {
    listen 80;
    server_name website.com;

    location / {
        root /usr/share/nginx/html;
        index index.php index.html index.htm;
    }

    location ~ \.php$ {
        root /usr/share/nginx/html;    # must match the static root so SCRIPT_FILENAME resolves correctly
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
Check the configuration syntax and restart the service
nginx -t
systemctl restart nginx
Check the current file-descriptor limit
ulimit -n
Raise it
ulimit -n 65535
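ulimit -n only changes the limit for the current shell session; to make it survive a re-login, the usual approach is an entry in /etc/security/limits.conf (the values here mirror the nginx configuration above):

```
# /etc/security/limits.conf
*    soft    nofile    65535
*    hard    nofile    65535
```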
Adjust the PHP configuration for php-fpm
vim /etc/php.ini
date.timezone = Asia/Shanghai
short_open_tag = On    # allow short open tags (<? ... ?>) in PHP code
Start and enable the services
systemctl start nginx
systemctl start mariadb
systemctl start php-fpm
systemctl enable nginx
systemctl enable mariadb
systemctl enable php-fpm
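A quick way to confirm that nginx is handing `.php` requests to php-fpm is to drop a test page into the web root; the DOCROOT variable and the info.php filename are illustrative assumptions, not part of the original setup:

```shell
# Drop a minimal PHP test page into the nginx document root.
DOCROOT=${DOCROOT:-/usr/share/nginx/html}
mkdir -p "$DOCROOT"
cat > "$DOCROOT/info.php" <<'EOF'
<?php phpinfo();
EOF
echo "created $DOCROOT/info.php"
```

Browsing to http://192.168.1.242/info.php (or node05's IP) should then render the PHP information page; delete the file afterwards, since it exposes server details.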
2.3.2 Install haproxy & keepalived on node01 and node02
yum install haproxy -y
yum install keepalived -y
Configure haproxy (frontend, backend, listen sections)
vim /etc/haproxy/haproxy.cfg
frontend web
    bind *:80
    default_backend varnish-server

backend varnish-server
    balance roundrobin
    server varnishserver 192.168.1.178:6081 check inter 3000 rise 3 fall 5

listen admin
    bind :9527
    stats enable
    stats uri /haproxyadmin
    stats auth admin:admin
    stats refresh 20s
    stats admin if TRUE
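The backend above uses plain TCP health checks. If a layer-7 check against Varnish is preferred, it could be sketched with option httpchk; the check URL here is an assumption:

```
backend varnish-server
    balance roundrobin
    option httpchk GET /index.html
    server varnishserver 192.168.1.178:6081 check inter 3000 rise 3 fall 5
```

With httpchk, haproxy marks the server down when the request fails or returns a non-2xx/3xx status, not merely when the TCP port stops answering.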
Start and enable the service
systemctl start haproxy
systemctl enable haproxy
Configure keepalived
node01
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from root@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id keepalived_haproxy
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"    # exits 0 as long as an haproxy process exists
    interval 2
    fall 2
    rise 2
    weight -4
}

vrrp_instance VI_1 {
    state MASTER
    interface enp0s8
    virtual_router_id 170
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass keer
    }
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
        chk_haproxy
    }
}
On node02, change state to BACKUP and lower priority accordingly.
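Because chk_haproxy subtracts its weight of 4 from the priority when the check fails, the master drops from 100 to 96; node02's priority must therefore sit strictly between 96 and 100 for failover to trigger. A sketch of the lines that differ on node02 (the value 98 is an example):

```
vrrp_instance VI_1 {
    state BACKUP     # node02 starts as backup
    priority 98      # 96 < 98 < 100: takes over only when the master's check fails
    # interface, virtual_router_id, authentication and VIP stay identical to node01
}
```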
Start and enable the service
systemctl start keepalived
systemctl enable keepalived
2.3.3 Configure varnish on node03
yum install epel-release
yum install varnish -y
Edit the configuration files
vim /etc/varnish/varnish.params    # daemon runtime settings
VARNISH_ADMIN_LISTEN_PORT=6082
VARNISH_STORAGE="file,/data/cache,1G"    # cache on local disk; /data/cache must exist and be writable by varnish
vim /etc/varnish/default.vcl    # main VCL configuration
① Define a health check:
vcl 4.0;             # declare the VCL version
import directors;    # load the backend round-robin (director) module
probe backend_healthcheck {    # health check named backend_healthcheck
    .url = "/index.html";      # URL to request on each probe
    .window = 5;               # judge health over the last 5 probes
    .threshold = 2;            # at least 2 of them must succeed
    .interval = 3s;
    .timeout = 1s;
}
② Define the backend servers:
backend web1 {
    .host = "192.168.1.242";    # node04 IP
    .port = "80";
    .probe = backend_healthcheck;
}
backend web2 {
    .host = "192.168.1.128";    # node05 IP
    .port = "80";
    .probe = backend_healthcheck;
}
③ Assemble the backend cluster:
sub vcl_init {
    new img_cluster = directors.round_robin();
    img_cluster.add_backend(web1);
    img_cluster.add_backend(web2);
}
acl purgers {    # source IPs allowed to purge the cache
    "127.0.0.1";
    "192.168.0.0"/16;
}
sub vcl_recv {
    set req.backend_hint = img_cluster.backend();    # route requests through the round-robin director
                                                     # (without this line the director defined above is never used)
    if (req.method == "GET" && req.http.cookie) {
        return (hash);    # done in the recv engine; hand over to the hash engine
    }
    if (req.method != "GET" &&
        req.method != "HEAD" &&
        req.method != "PUT" &&
        req.method != "POST" &&
        req.method != "TRACE" &&
        req.method != "OPTIONS" &&
        req.method != "PURGE" &&
        req.method != "DELETE") {
        return (pipe);    # any other method is piped straight to the backend
    }
    if (req.url ~ "index.php") {
        return (pass);    # never cache dynamic pages
    }
    if (req.method == "PURGE") {    # handle cache-purge requests
        if (client.ip ~ purgers) {
            return (purge);
        }
    }
    if (req.http.X-Forwarded-For) {    # append the client IP to X-Forwarded-For for the backend
        set req.http.X-Forwarded-For = req.http.X-Forwarded-For + "," + client.ip;
    } else {
        set req.http.X-Forwarded-For = client.ip;
    }
    return (hash);
}
sub vcl_backend_response {
    if (bereq.url ~ "\.(jpg|jpeg|gif|png)$") {
        set beresp.ttl = 30d;
    }
    if (bereq.url ~ "\.(html|css|js)$") {
        set beresp.ttl = 7d;
    }
    if (beresp.http.Set-Cookie) {    # responses that set cookies must not be cached;
        set beresp.uncacheable = true;    # deliver them straight to the client
        set beresp.grace = 30m;
        return (deliver);
    }
}
sub vcl_deliver {
    if (obj.hits > 0) {    # add an X-Cache header showing whether the cache was hit
        set resp.http.X-Cache = "HIT from " + server.ip;
    } else {
        set resp.http.X-Cache = "MISS";
    }
    unset resp.http.X-Powered-By;    # hide the PHP framework/version header
    unset resp.http.Via;             # hide the Varnish Via header
    unset resp.http.Server;
    unset resp.http.X-Drupal-Cache;
    unset resp.http.Link;
    unset resp.http.X-Varnish;
}
sub vcl_hash {
    hash_data(req.url);
}
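With the purgers ACL and the PURGE branch in vcl_recv above, a cached object can be invalidated from an allowed host; a sketch (the path is an example):

```shell
# Ask Varnish to evict /index.html from the cache
curl -X PURGE http://192.168.1.178:6081/index.html
```

Requests from IPs outside the ACL fall through the PURGE branch unhandled.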
Start and enable the service
systemctl start varnish
systemctl enable varnish
Testing
On the Varnish server, watch the cache statistics:
varnishstat
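From a client machine, cache behaviour can also be checked through the VIP by inspecting the X-Cache header set in vcl_deliver; a sketch (run it twice):

```shell
# Within the TTL, the first request is typically a MISS and the repeat a HIT
curl -sI http://192.168.1.100/index.html | grep -i X-Cache
curl -sI http://192.168.1.100/index.html | grep -i X-Cache
```

The hit counters in varnishstat should increase accordingly.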