Batch Operations on Linux with Ansible

Introduction to Ansible:

  • Ansible is a free, open-source configuration management and automation tool for Unix-like systems. It is written in Python, similar to SaltStack and Puppet, but unlike them it requires no agent on the managed nodes; it communicates over SSH. Built on Python's paramiko library, it works in a distributed way without any client software, is lightweight, uses YAML plus the Jinja2 template language for its configuration syntax, and provides powerful remote command execution.

Ansible features:

  • Simple to deploy: only the control node needs the Ansible environment; the managed nodes require no setup.
  • Manages devices over SSH by default.
  • Ships with a large set of modules for routine operations, covering most day-to-day tasks.
  • Simple to configure, powerful, and highly extensible.
  • Supports an API and custom modules, and is easily extended with Python.
  • Uses Playbooks to define complex configuration and state management.
  • Lightweight: no agent on the managed nodes; upgrades only need to be applied on the control machine.
  • Offers a full-featured web UI and REST API through the AWX platform.
  • Supports management as a non-root user via sudo.

Ansible architecture:

[Architecture diagram omitted]

Core components:

  • ansible (core program): the core of Ansible; provides the command-line interface through which users drive Ansible.
  • Host Inventory: defines which hosts Ansible manages. In a small environment it is usually enough to list host IP addresses in the hosts file; in medium and large environments a static inventory file or a dynamic inventory may be used to generate the target hosts.
  • Core Modules: the functional modules Ansible uses to carry out commands; most are built in.
  • Custom Modules: if the built-in modules do not meet your needs, you can write your own.
  • Connection Plugins: supplement module functionality, e.g. connection-type plugins, loop plugins, variable plugins, and filter plugins; rarely used directly.
  • Playbook: the configuration file that defines a set of Ansible tasks, which Ansible executes in order; usually written as a YAML file.
  • API: the application programming interface for third-party programs.

What can Ansible do?

 Ansible helps operators carry out batch tasks and frequently repeated work.

  • For example: install the nginx service on 100 servers at once and start it after installation (a playbook sketch for this case follows the list).
  • For example: copy a file to 100 servers in a single pass.
  • For example: deploy a given service on every new server that joins the environment.
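
As an illustration of the first case, such a batch installation can be expressed as a short playbook. This is only a minimal sketch, not part of the original walkthrough: the group name webservers is assumed to be defined in the inventory, and the yum/service modules are chosen here for a CentOS target.

- hosts: webservers            # assumed inventory group
  become: yes
  tasks:
    - name: install nginx     # install the package with the yum module
      yum:
        name: nginx
        state: present
    - name: start and enable nginx
      service:
        name: nginx
        state: started
        enabled: yes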

For more details, see the official documentation: https://docs.ansible.com/ansible/2.9/index.html

Environment:

Attribute   | Control machine                          | Server-01                                | Server-02
Node        | wenCheng                                 | Server-01                                | Server-02
OS          | CentOS Linux release 7.5.1804 (Minimal)  | CentOS Linux release 7.5.1804 (Minimal)  | CentOS Linux release 7.5.1804 (Minimal)
Kernel      | 3.10.0-862.el7.x86_64                    | 3.10.0-862.el7.x86_64                    | 3.10.0-862.el7.x86_64
SELinux     | setenforce 0 / disabled                  | setenforce 0 / disabled                  | setenforce 0 / disabled
Firewalld   | systemctl stop/disable firewalld         | systemctl stop/disable firewalld         | systemctl stop/disable firewalld
IP address  | 172.16.70.37                             | 172.16.70.181                            | 172.16.70.182

Common Ansible parameters and syntax are covered below. For module usage details, see the official module documentation: https://docs.ansible.com/ansible/2.9/modules

Common Ansible modules
ping module: checks whether a node is reachable; takes no parameters and simply replies pong if the host is online.
raw module: runs a raw command directly, bypassing the module subsystem.
yum module: package installation and management for RedHat/CentOS.
apt module: package installation and management for Ubuntu/Debian.
pip module: manages Python library dependencies; either name or requirements must be provided.
synchronize module: synchronizes files with rsync, pushing a directory on the control machine to a directory on the managed node.
template module: renders a file from a template and copies it to the remote host (template uses the Jinja2 format and substitutes variables inside the document).
copy module: copies files to the remote hosts.
user and group modules: the user module wraps the useradd, userdel, and usermod commands; the group module wraps groupadd, groupdel, and groupmod.
service / systemd module: manages services on remote hosts.
get_url module: downloads files from HTTP, HTTPS, or FTP servers (similar to wget).
fetch module: retrieves files from remote machines and stores them locally in a file tree organized by hostname.
file module: file operations on remote hosts.
lineinfile module: edits files on remote hosts.
unarchive module: unpacks archives.
command and shell modules: run a given command on the managed nodes. The difference: shell supports special shell characters (pipes, redirection, and so on), while command does not.
hostname module: changes the hostname of remote hosts.
script module: runs a script from the control machine on remote hosts, equivalent to scp plus shell.
stat module: retrieves the status of a remote file, including atime, ctime, mtime, md5, uid, gid, and so on.
cron module: manages crontab entries on remote hosts.
mount module: mounts filesystems.
find module: searches managed hosts for files matching given conditions, like the find command.
selinux module: manages SELinux on the managed nodes.
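
All of the modules above, and their full parameter lists, can be browsed locally with the bundled ansible-doc tool:

[root@wenCheng ~]# ansible-doc -l        # list every available module
[root@wenCheng ~]# ansible-doc -s yum    # short parameter summary for the yum module
[root@wenCheng ~]# ansible-doc yum       # full documentation for the yum module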

Ansible syntax and configuration parameters
Syntax:
ansible <host-pattern> -m <module> -a '<module arguments>'
That is:
ansible <hosts to match> -m <module> -a '<what to run>'
Explanation:
  host pattern: which machines the command applies to (a single host, a group, or all). When -m is omitted, the default module is command, which runs ordinary shell commands.
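
For example, the simplest ad-hoc command checks connectivity with the ping module, which needs no arguments (this assumes the target hosts are already listed in the inventory, which is set up in Scenario 1 below):

[root@wenCheng ~]# ansible all -m ping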

Scenario 1: installing Ansible and distributing the SSH public key to all nodes for the first time (on the control machine).

  • command and shell modules: run a given command on the managed nodes. The difference: shell supports special shell characters (pipes, redirection, and so on); command does not.

[root@wenCheng ~]# yum install epel-release -y
[root@wenCheng ~]# yum install ansible -y
[root@wenCheng ~]# ansible --version
ansible 2.9.21
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Apr 11 2018, 07:36:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]

[root@wenCheng ~]# rpm -qa | grep ansible
ansible-2.9.21-1.el7.noarch
[root@wenCheng ~]# rpm -ql ansible-2.9.21-1.el7.noarch | less
/etc/ansible/ansible.cfg #main configuration file; controls how Ansible behaves
/etc/ansible/hosts #host inventory
/etc/ansible/roles/ #directory for roles
/usr/bin/ansible #main program; runs ad-hoc commands
/usr/bin/ansible-doc #documentation tool for looking up modules and their options
/usr/bin/ansible-galaxy #tool for downloading/uploading roles and shared content from/to the official Galaxy platform
/usr/bin/ansible-playbook #tool for running playbooks (orchestrated automation tasks)
/usr/bin/ansible-pull #pulls playbooks from a repository and runs them on the local host
/usr/bin/ansible-vault #file encryption tool
/usr/bin/ansible-console #interactive console-based execution tool
……

Back up the configuration files

[root@wenCheng ~]# cp /etc/ansible/hosts{,.bak}
[root@wenCheng ~]# cp /etc/ansible/ansible.cfg{,.bak}
[root@wenCheng ~]# vim /etc/ansible/hosts
……

Append the following at the end of the file

Remote hosts (adjust to your environment): single IPs or an IP range, plus user name, password, and port. Two example formats follow.

[type1]
172.16.70.181
172.16.70.182
[type1:vars]
ansible_ssh_user='root'
ansible_ssh_pass='centos'
ansible_ssh_port='22'

[type2]
172.16.70.[181:182] ansible_user='root' ansible_ssh_pass='centos' ansible_port='22'
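
Before going further, the inventory can be checked without connecting to any host by listing what a pattern matches:

[root@wenCheng ~]# ansible all --list-hosts
[root@wenCheng ~]# ansible type1 --list-hosts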

[root@wenCheng ~]# vim /etc/ansible/ansible.cfg
……
host_key_checking = False    # whether to check the host key on first connection; uncomment to disable SSH host key checking
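
The playbook below distributes /root/.ssh/id_rsa.pub, so a key pair must already exist on the control machine. If it does not, generate one first (the empty passphrase here is an assumption made for unattended use). Password-based connections like the inventory entries above also generally require the sshpass program on the control machine; treat that as an environment assumption not covered in the original steps.

[root@wenCheng ~]# ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
[root@wenCheng ~]# yum install -y sshpass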

Create a new YAML file

[root@wenCheng ~]# cat /root/ssh_key.yaml

- hosts: all    # target host group
  tasks:
    - name: send id_rsa.pub
      authorized_key: user=root key="{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"    # user to authorize on the managed host / path of the local public key

Run the playbook to distribute the public key to all hosts

[root@wenCheng ~]# ansible-playbook ssh_key.yaml

PLAY [all] ********************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************
ok: [172.16.70.182]
ok: [172.16.70.181]

TASK [send id_rsa.pub] ********************************************************************************************************************
ok: [172.16.70.181]
ok: [172.16.70.182]

PLAY RECAP ********************************************************************************************************************************
172.16.70.181 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
172.16.70.182 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Verify the result

[root@wenCheng ~]# ansible all -m command -a "hostname"
172.16.70.182 | CHANGED | rc=0 >>
Server-02
172.16.70.181 | CHANGED | rc=0 >>
Server-01

[root@wenCheng ~]# ansible all -m shell -a "hostname"
172.16.70.182 | CHANGED | rc=0 >>
Server-02
172.16.70.181 | CHANGED | rc=0 >>
Server-01

The command module does not support pipes

[root@wenCheng ~]# ansible all -m command -a "cat /etc/passwd| grep centos"
172.16.70.181 | FAILED | rc=1 >>
cat: /etc/passwd|: No such file or directory
cat: grep: No such file or directory
cat: centos: No such file or directorynon-zero return code
172.16.70.182 | FAILED | rc=1 >>
cat: /etc/passwd|: No such file or directory
cat: grep: No such file or directory
cat: centos: No such file or directorynon-zero return code

[root@wenCheng ~]# ansible all -m shell -a "cat /etc/passwd| grep centos"
172.16.70.182 | CHANGED | rc=0 >>
centos:x:1000:1000::/home/centos:/bin/bash
172.16.70.181 | CHANGED | rc=0 >>
centos:x:1000:1000::/home/centos:/bin/bash

Scenario 2: installing software in bulk from the control machine.

  • yum module: package installation and management for RedHat/CentOS.

Parameters:
config_file: the yum configuration file to use (optional)
disable_gpg_check: disable the GPG check (optional)
disablerepo: disable a specific repository (optional)
enablerepo: enable a specific repository (optional)
name: the package to operate on; defaults to the latest version. A version can be appended, and a URL or the path of a local rpm package is also accepted.
state: whether to install or remove; present, installed, and latest mean install, absent and removed mean uninstall. present is the default; latest installs the newest version.

Install rsync:
[root@wenCheng ~]# ansible all -m yum -a "name=rsync state=present"

[root@wenCheng ~]# ansible all -m yum -a "name=http://mirror.centos.org/centos/7/os/x86_64/Packages/rsync-3.1.2-10.el7.x86_64.rpm state=present"

Uninstall rsync:
[root@wenCheng ~]# ansible all -m yum -a "name=rsync state=removed"

Scenario 3: distributing files and directories in bulk from the control machine.

  • synchronize module: synchronizes files with rsync, pushing a directory on the control machine to a directory on the managed nodes.

Parameters:
delete: delete files on the destination that do not exist on the source; delete=yes makes both sides identical (the pushing side wins). Default is no.
src: path on the source host to synchronize to the destination; may be absolute or relative. If the path ends with "/", only the contents of the directory are copied; without the trailing "/", the directory itself is copied as well.
dest: path on the destination host; may be absolute or relative.
dest_port: port on the destination host, 22 (SSH) by default.
mode: push or pull; push (the default) uploads files from the control machine to the remote hosts, pull fetches files from the remote hosts.
rsync_opts: additional rsync options, passed as an array.

Continuing from Scenario 2, create the files and directories used in the examples

[root@wenCheng ~]# tree /tmp/
/tmp/
├── dir_ansible1
│ └── 1
├── dir_ansible2
│ └── 2
├── dir_ansible3
│ └── 3
├── dir_ansible4
│ └── 4
├── file_ansible1
├── file_ansible2
├── file_ansible3
└── file_ansible4
4 directories, 8 files

Push the file /tmp/file_ansible1 to the remote /tmp directory

[root@wenCheng ~]# ansible all -m synchronize -a 'src=/tmp/file_ansible1 dest=/tmp'

Push the file /tmp/file_ansible2 to the remote hosts, overwriting the existing file /tmp/file_ansible1

[root@wenCheng ~]# ansible all -m synchronize -a 'src=/tmp/file_ansible2 dest=/tmp/file_ansible1'

Push the directory /tmp/dir_ansible1 to the remote /tmp directory (the existing contents of the remote /tmp are kept, and dir_ansible1 is added)

[root@wenCheng ~]# ansible all -m synchronize -a 'src=/tmp/dir_ansible1 dest=/tmp'

Push everything under /tmp/ to the remote /tmp directory so that both sides match; the default is delete=no (with delete=yes, anything under the remote /tmp that is not on the source is removed during the sync)

[root@wenCheng ~]# ansible all -m synchronize -a "src=/tmp/ dest=/tmp delete=yes"

Pull the remote file /etc/hostname to the local /tmp directory

[root@wenCheng ~]# ansible all -m synchronize -a "src=/etc/hostname dest=/tmp rsync_opts='-a' mode=pull"

  • copy module: copies files to the remote hosts.

It uploads local files from the control node to the managed nodes; it cannot pull files from a managed node back to the control node.

Parameters:
src: source path; relative or absolute, and it may be a directory. (Not mandatory; content can be used instead to generate the file's contents directly.) src is the local path of the file to copy to the remote host.
    If the path is a directory, it is copied recursively. In that case, a trailing "/" copies only the directory's contents, while omitting it copies the directory itself as well, similar to rsync.
dest: destination path; must be absolute, and must be a directory if src is a directory. This parameter is mandatory.
owner: owner to set;
group: group to set;
mode: permissions to set, as digits such as 0644;
content: used instead of src to write the given content directly to dest; variables can be referenced, including host variables from the inventory. The existing file contents are overwritten.
backup: back up the original file before overwriting it; the backup file name includes a timestamp. Options: yes|no.
force: if the file already exists on the target with different contents, yes forces an overwrite, while no copies only when the file does not yet exist at the target location. Default is yes.
directory_mode: permissions applied recursively to directories; defaults to the system default.
others: all options of the file module can also be used here.

Note: src and content cannot be used together (a content example follows below).
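
As an aside, content can replace src when the file body is short. A minimal sketch (the destination path and text are illustrative, not from the original walkthrough):

[root@wenCheng ~]# ansible all -m copy -a "content='managed by ansible' dest=/tmp/ansible_note mode=0644"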

Copy the local directory /tmp/dir_ansible1 to the remote /tmp directory

[root@wenCheng ~]# ansible all -m copy -a 'src=/tmp/dir_ansible1 dest=/tmp backup=yes'

Copy the local file /tmp/file_ansible1 to the remote /tmp directory, setting the group to centos and the permissions to 400

[root@wenCheng ~]# ansible all -m copy -a 'src=/tmp/file_ansible1 dest=/tmp group=centos mode=400'

Differences between the synchronize and copy modules:

  • copy cannot pull files from the remote end to the local machine; fetch can, but its src parameter does not recurse into directories, so only individual files can be retrieved;
  • copy's remote_src parameter copies from one path on the remote server to another path on the same server, similar to running a copy command through the shell module;
  • synchronize supports both pushing files out and pulling them back, via its push and pull modes. It relies on rsync being available, but does not depend on modules defined in an rsync configuration file;
  • copy is suited to small-scale file operations, while synchronize handles large-scale transfers (a fetch example follows below).
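
Since fetch is mentioned above, a minimal pull example with it looks like this (the local destination /tmp/fetched is illustrative; fetch stores each retrieved file under a subdirectory named after the host):

[root@wenCheng ~]# ansible all -m fetch -a "src=/etc/hostname dest=/tmp/fetched"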

Appendix: the default Ansible configuration, annotated:

[root@wenCheng ~]# cat /etc/ansible/ansible.cfg
……
[defaults]

# some basic default values...

#inventory = /etc/ansible/hosts              # location of the inventory file (host list, script, or directory)
#library = /usr/share/my_modules/            # directory for library files
#module_utils = /usr/share/my_module_utils/  # directory for module utilities
#remote_tmp = ~/.ansible/tmp                 # temporary directory on the remote hosts
#local_tmp = ~/.ansible/tmp                  # temporary directory on the local machine
#plugin_filters_cfg = /etc/ansible/plugin_filters.yml  # configuration file listing rejected modules
#forks = 5                # default number of parallel forks
#poll_interval = 15       # default polling interval
#sudo_user = root         # default sudo user
#ask_sudo_pass = True     # whether to prompt for the sudo password
#ask_pass = True          # whether to prompt for the connection password
#transport = smart        # default transport ("smart")
#remote_port = 22         # default remote SSH port
#module_lang = C          # language used between modules and the system, 'C' by default
#module_set_locale = False    # whether to set the locale environment variables

# plays will gather facts by default, which contain information about the remote system.
# smart    - gather by default, but don't regather if already gathered
# implicit - gather by default, turn off with gather_facts: False
# explicit - do not gather by default, must say gather_facts: True
#gathering = implicit

# This only affects the gathering done by a play's gather_facts directive;
# by default gathering retrieves all facts subsets:
# all      - gather all subsets
# network  - gather min and network facts
# hardware - gather hardware facts (longest facts to retrieve)
# virtual  - gather min and virtual facts
# facter   - import facts from facter
# ohai     - import facts from ohai
# You can combine them using comma (ex: network,virtual)
# You can negate them using ! (ex: !hardware,!facter,!ohai)
# A minimal set of facts is always gathered.
#gather_subset = all

# some hardware related facts are collected with a maximum timeout of 10 seconds.
# This option lets you increase or decrease that timeout to something more suitable for the environment.
gather_timeout = 10    # timeout for collecting hardware-related facts; adjust to suit the environment

# Ansible facts are available inside the ansible_facts.* dictionary namespace.
# This setting maintains the behaviour which was the default prior to 2.5,
# duplicating these variables into the main namespace, each with a prefix of 'ansible_'.
# This variable is set to True by default for backwards compatibility; it
# will be changed to a default of 'False' in a future release.
inject_facts_as_vars = True    # kept True for backwards compatibility with the pre-2.5 behaviour

# additional paths to search for roles in, colon separated
#roles_path = /etc/ansible/roles      # additional paths to search for roles, colon separated

# uncomment this to disable SSH key host checking
#host_key_checking = False        # whether to check the host key on first connection; uncomment to disable SSH host key checking

# change the default callback; you can only have one 'stdout' type enabled at a time.
#stdout_callback = skippy    # change the default stdout callback type

# Ansible ships with some plugins that require whitelisting; this is done to avoid
# running all of a type by default. These settings list those that you want enabled
# for your system. Custom plugins should not need this unless the plugin author specifies it.
# Enabled callback plugins can output to stdout but cannot be 'stdout' type.
#callback_whitelist = timer, mail    # whitelist of callback plugins to enable; custom plugins normally do not need this

# Determine whether includes in tasks and handlers are "static" by default.
# As of 2.0, includes are dynamic by default. Setting these values to True
# will make includes behave more like they did in the 1.x versions.
#task_includes_static = False
#handler_includes_static = False

# Controls if a missing handler for a notification event is an error or a warning
#error_on_missing_handler = True    # whether a missing handler for a notification is an error or only a warning

# change this for alternative sudo implementations
#sudo_exe = sudo

# What flags to pass to sudo
# WARNING: leaving out the defaults might create unexpected behaviours
#sudo_flags = -H -S -n    # flags passed to sudo; leaving out the defaults may cause unexpected behaviour

# SSH timeout
#timeout = 10    # default SSH timeout

# default user to use for playbooks if user is not specified
# (/usr/bin/ansible will use current user as default)
#remote_user = root    # default remote user for playbooks when none is given; ad-hoc /usr/bin/ansible uses the current user

# logging is off by default unless this path is defined; if so defined, consider logrotate
#log_path = /var/log/ansible.log    # where execution logs are written

# default module name for /usr/bin/ansible
#module_name = command    # default module for ad-hoc commands

# use this shell for commands executed under sudo;
# you may need to change this to /bin/bash in rare instances if sudo is constrained
#executable = /bin/sh

# if inventory variables overlap, does the higher precedence one win
# or are hash values merged together? The default is 'replace' but
# this can also be set to 'merge'.
#hash_behaviour = replace    # when inventory variables overlap, 'replace' lets the higher-precedence one win; 'merge' merges hashes

# by default, variables from roles will be visible in the global variable scope.
# To prevent this, the following option can be enabled, and only tasks and handlers
# within the role will see the variables there.
#private_role_vars = yes    # role variables are globally visible by default; enable this to keep them private to the role

# list any Jinja2 extensions to enable here:
#jinja2_extensions = jinja2.ext.do,jinja2.ext.i18n    # Jinja2 extensions to enable

# if set, always use this private key file for authentication,
# same as if passing --private-key to ansible or ansible-playbook
#private_key_file = /path/to/file    # private key file used for authentication

# If set, configures the path to the Vault password file as an alternative to
# specifying --vault-password-file on the command line.
#vault_password_file = /path/to/vault_password_file    # path to the Vault password file, instead of --vault-password-file on the command line

# format of string {{ ansible_managed }} available within Jinja2 templates;
# indicates to users editing template files that they will be replaced,
# replacing {file}, {host} and {uid} and strftime codes with proper values.
#ansible_managed = Ansible managed: {file} modified on %Y-%m-%d %H:%M:%S by {uid} on {host}
# {file}, {host}, {uid}, and the timestamp can all interfere with idempotence
# in some situations so the default is a static string:
#ansible_managed = Ansible managed

# by default, ansible-playbook will display "Skipping [host]" if it determines a task
# should not be run on a host. Set this to "False" if you don't want to see these "Skipping"
# messages. NOTE: the task header will still be shown regardless of whether or not the
# task is skipped.
#display_skipped_hosts = True    # set to False to hide "Skipping [host]" messages

# by default, if a task in a playbook does not include a name: field then
# ansible-playbook will construct a header that includes the task's action but
# not the task's args. This is a security feature because ansible cannot know
# if the *module* considers an argument to be no_log at the time that the
# header is printed. If your environment doesn't have a problem securing
# stdout from ansible-playbook (or you have manually specified no_log in your
# playbook on all of the tasks where you have secret information) then you can
# safely set this to True to get more informative messages.
#display_args_to_stdout = False

# by default (as of 1.3), Ansible will raise errors when attempting to dereference
# Jinja2 variables that are not set in templates or action lines. Uncomment this line
# to revert the behavior to pre-1.3.
#error_on_undefined_vars = False

# by default (as of 1.6), Ansible may display warnings based on the configuration of the
# system running ansible itself. This may include warnings about 3rd party packages or
# other conditions that should be resolved if possible.
# To disable these warnings, set the following value to False:
#system_warnings = True

# by default (as of 1.4), Ansible may display deprecation warnings for language
# features that should no longer be used and will be removed in future versions.
# To disable these warnings, set the following value to False:
#deprecation_warnings = True

# (as of 1.8), Ansible can optionally warn when usage of the shell and
# command module appear to be simplified by using a default Ansible module
# instead. These warnings can be silenced by adjusting the following
# setting or adding warn=yes or warn=no to the end of the command line
# parameter string. This will for example suggest using the git module
# instead of shelling out to the git command.
command_warnings = False

# set plugin path directories here, separate with colons    # plugin directories; Ansible loads plugins from these paths

#action_plugins = /usr/share/ansible/plugins/action      
#become_plugins = /usr/share/ansible/plugins/become
#cache_plugins = /usr/share/ansible/plugins/cache
#callback_plugins = /usr/share/ansible/plugins/callback
#connection_plugins = /usr/share/ansible/plugins/connection
#lookup_plugins = /usr/share/ansible/plugins/lookup
#inventory_plugins = /usr/share/ansible/plugins/inventory
#vars_plugins = /usr/share/ansible/plugins/vars
#filter_plugins = /usr/share/ansible/plugins/filter
#test_plugins = /usr/share/ansible/plugins/test
#terminal_plugins = /usr/share/ansible/plugins/terminal
#strategy_plugins = /usr/share/ansible/plugins/strategy

# by default, ansible will use the 'linear' strategy but you may want to try another one
#strategy = free    # 'linear' is the default strategy; 'free' is an alternative

# by default callbacks are not loaded for /bin/ansible; enable this if you want,
# for example, a notification or logging callback to also apply to /bin/ansible runs
#bin_ansible_callbacks = False    # set to True to load callback plugins for ad-hoc /bin/ansible runs too

# don't like cows? that's unfortunate.
# set to 1 if you don't want cowsay support or export ANSIBLE_NOCOWS=1
#nocows = 1    # set to 1 (or export ANSIBLE_NOCOWS=1) to disable cowsay output

# set which cowsay stencil you'd like to use by default. When set to 'random',
# a random stencil will be selected for each task. The selection will be filtered
# against the `cow_whitelist` option below.
#cow_selection = default
#cow_selection = random

# when using the 'random' option for cowsay, stencils will be restricted to this list.
# it should be formatted as a comma-separated list with no spaces between names.
# NOTE: line continuations here are for formatting purposes only, as the INI parser
# in python does not support them.
#cow_whitelist=bud-frogs,bunny,cheese,daemon,default,dragon,elephant-in-snake,elephant,eyes,\
#              hellokitty,kitty,luke-koala,meow,milk,moofasa,moose,ren,sheep,small,stegosaurus,\
#              stimpy,supermilker,three-eyes,turkey,turtle,tux,udder,vader-koala,vader,www

# don't like colors either?
# set to 1 if you don't want colors, or export ANSIBLE_NOCOLOR=1
#nocolor = 1

# if set to a persistent type (not 'memory', for example 'redis') fact values
# from previous runs in Ansible will be stored. This may be useful when
# wanting to use, for example, IP information from one group of servers
# without having to talk to them in the same playbook run to get their
# current IP information.
#fact_caching = memory    # storage backend for the fact cache; 'memory' is transient, a file or database backend persists across runs

# This option tells Ansible where to cache facts. The value is plugin dependent.
# For the jsonfile plugin, it should be a path to a local directory.
# For the redis plugin, the value is a host:port:database triplet: fact_caching_connection = localhost:6379:0
#fact_caching_connection=/tmp    # where the fact cache is stored

# retry files
# When a playbook fails a .retry file can be created that will be placed in ~/
# You can enable this feature by setting retry_files_enabled to True
# and you can change the location of the files by setting retry_files_save_path
#retry_files_enabled = False
#retry_files_save_path = ~/.ansible-retry    # where .retry files are written when a playbook fails

# squash actions
# Ansible can optimise actions that call modules with list parameters
# when looping. Instead of calling the module once per with_ item, the
# module is called once with all items at once. Currently this only works
# under limited circumstances, and only with parameters named 'name'.
#squash_actions = apk,apt,dnf,homebrew,pacman,pkgng,yum,zypper

# prevents logging of task data, off by default
#no_log = False    # prevents logging of task data; off by default

# prevents logging of tasks, but only on the targets; data is still logged on the master/controller
#no_target_syslog = False    # suppress task logging on the targets only; data is still logged on the controller

# controls whether Ansible will raise an error or warning if a task has no
# choice but to create world readable temporary files to execute a module on
# the remote machine. This option is False by default for security. Users may
# turn this on to have behaviour more like Ansible prior to 2.1.x. See
# https://docs.ansible.com/ansible/become.html#becoming-an-unprivileged-user
# for more secure ways to fix this than enabling this option.
#allow_world_readable_tmpfiles = False    # warn/error when a task must create world-readable temp files on the remote machine; False by default for security

# controls the compression level of variables sent to worker processes.
# At the default of 0, no compression is used. This value must be an integer from 0 to 9.
#var_compression_level = 9    # compression level for variables sent to worker processes; 0 (default) means no compression, valid range 0-9

# controls what compression method is used for new-style ansible modules when
# they are sent to the remote system. The compression types depend on having
# support compiled into both the controller's python and the client's python.
# The names should match with the python Zipfile compression types:
# * ZIP_STORED (no compression. available everywhere)
# * ZIP_DEFLATED (uses zlib, the default)
# These values may be set per host via the ansible_module_compression inventory variable.
#module_compression = 'ZIP_DEFLATED'    # compression method used when sending modules to the remote system

# This controls the cutoff point (in bytes) on --diff for files,
# set to 0 for unlimited (RAM may suffer!).
#max_diff_size = 1048576    # cutoff in bytes for --diff on files; 0 means unlimited (RAM may suffer)

# This controls how ansible handles multiple --tags and --skip-tags arguments
# on the CLI. If this is True then multiple arguments are merged together. If
# it is False, then the last specified argument is used and the others are ignored.
# This option will be removed in 2.8.
#merge_multiple_cli_flags = True    # whether multiple --tags/--skip-tags arguments are merged (True) or only the last one is used (False)

# Controls showing custom stats at the end, off by default
#show_custom_stats = True    # show custom stats at the end of a run; off by default

# Controls which files to ignore when using a directory as inventory with
# possibly multiple sources (both static and dynamic)
#inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo    # file extensions ignored when a directory is used as the inventory

# This family of modules use an alternative execution path optimized for network appliances;
# only update this setting if you know how this works, otherwise it can break module execution
#network_group_modules=eos, nxos, ios, iosxr, junos, vyos    # module families with an execution path optimized for network devices; change only if you know what you are doing

# When enabled, this option allows lookups (via variables like {{lookup('foo')}} or when used as
# a loop with `with_foo`) to return data that is not marked "unsafe". This means the data may contain
# jinja2 templating language which will be run through the templating engine.
# ENABLING THIS COULD BE A SECURITY RISK
#allow_unsafe_lookups = False    # allow lookups to return data not marked "unsafe"; enabling this can be a security risk

# set default errors for all plays
#any_errors_fatal = False    # default error behaviour for all plays

[inventory]

# enable inventory plugins, default: 'host_list', 'script', 'auto', 'yaml', 'ini', 'toml'
#enable_plugins = host_list, virtualbox, yaml, constructed    # inventory plugins enabled by default

# ignore these extensions when parsing a directory as inventory source
#ignore_extensions = .pyc, .pyo, .swp, .bak, ~, .rpm, .md, .txt, ~, .orig, .ini, .cfg, .retry    # extensions ignored when parsing a directory as an inventory source

# ignore files matching these patterns when parsing a directory as inventory source
#ignore_patterns=    # patterns of files to ignore when parsing a directory as an inventory source

# If 'true' unparsed inventory sources become fatal errors, they are warnings otherwise.
#unparsed_is_failed=False    # if true, unparsed inventory sources are fatal errors; otherwise they are only warnings

[privilege_escalation]  # privilege escalation settings
#become=True
#become_method=sudo
#become_user=root
#become_ask_pass=False

[paramiko_connection]  # rarely needed; listed here for reference

# uncomment this line to cause the paramiko connection plugin to not record new host
# keys encountered. Increases performance on new host additions. Setting works independently of the
# host key checking setting above.
#record_host_keys=False    # do not record the keys of newly encountered hosts, which improves performance

# by default, Ansible requests a pseudo-terminal for commands executed under sudo.
# Uncomment this line to disable this behaviour.
#pty=False    # uncomment to stop requesting a pseudo-terminal for sudo commands

# paramiko will default to looking for SSH keys initially when trying to
# authenticate to remote devices. This is a problem for some network devices
# that close the connection after a key failure. Uncomment this line to
# disable the Paramiko look for keys function.
#look_for_keys = False    # uncomment to stop Paramiko from looking for SSH keys first

# When using persistent connections with Paramiko, the connection runs in a
# background process. If the host doesn't already have a valid SSH key, by
# default Ansible will prompt to add the host key. This will cause connections
# running in background processes to fail. Uncomment this line to have
# Paramiko automatically add host keys.
#host_key_auto_add = True    # uncomment so Paramiko adds host keys automatically instead of prompting

[ssh_connection]  # Ansible connects over SSH by default; this section tunes the SSH connection, and the defaults are usually fine

# ssh arguments to use.
# Leaving off ControlPersist will result in poor performance, so use
# paramiko on older platforms rather than removing it; -C controls compression use.
#ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s    # extra ssh arguments; dropping ControlPersist hurts performance, -C enables compression

# The base directory for the ControlPath sockets.
# This is the "%(directory)s" in the control_path option.
# Example: control_path_dir = /tmp/.ansible/cp
#control_path_dir = ~/.ansible/cp

# The path to use for the ControlPath sockets. This defaults to a hashed string of the hostname,
# port and username (empty string in the config). The hash mitigates a common problem users
# found with long hostnames and the conventional %(directory)s/ansible-ssh-%%h-%%p-%%r format.
# In those cases, a "too long for Unix domain socket" ssh error would occur.
# Example: control_path = %(directory)s/%%h-%%r    # path for the ControlPath sockets; defaults to a hash of hostname, port, and username
#control_path =

# Enabling pipelining reduces the number of SSH operations required to
# execute a module on the remote server. This can result in a significant
# performance improvement when enabled, however when using "sudo:" you must
# first disable 'requiretty' in /etc/sudoers.
# By default, this option is disabled to preserve compatibility with
# sudoers configurations that have requiretty (the default on many distros).
#pipelining = False    # disabled by default to stay compatible with sudoers configurations that set requiretty

# Control the mechanism for transferring files (old)
# * smart = try sftp and then try scp [default]
# * True = use scp only
# * False = use sftp only
#scp_if_ssh = smart    # file transfer mechanism (old): smart | True | False

# Control the mechanism for transferring files (new)
# If set, this will override the scp_if_ssh option
# * sftp = use sftp to transfer files
# * scp = use scp to transfer files
# * piped = use 'dd' over SSH to transfer files
# * smart = try sftp, scp, and piped, in that order [default]
#transfer_method = smart    # file transfer mechanism (new): sftp | scp | piped | smart

# if False, sftp will not use batch mode to transfer files. This may cause some
# types of file transfer failures impossible to catch however, and should
# only be disabled if your sftp version has problems with batch mode
#sftp_batch_mode = False    # disable only if your sftp has problems with batch mode; some transfer failures then become impossible to catch

# The -tt argument is passed to ssh when pipelining is not enabled because sudo
# requires a tty by default.
#usetty = True    # pass -tt to ssh when pipelining is disabled, since sudo requires a tty by default

# Number of times to retry an SSH connection to a host, in case of UNREACHABLE.
# For each retry attempt, there is an exponential backoff,
# so after the first attempt there is 1s wait, then 2s, 4s etc. up to 30s (max).
#retries = 3    # number of times to retry an SSH connection when a host is UNREACHABLE

[persistent_connection]

# Configures the persistent connection timeout value in seconds. This value is
# how long the persistent connection will remain idle before it is destroyed.
# If the connection doesn't receive a request before the timeout value
# expires, the connection is shutdown. The default value is 30 seconds.
#connect_timeout = 30    # idle timeout for persistent connections, in seconds

# The command timeout value defines the amount of time to wait for a command
# or RPC call before timing out. The value for the command timeout must
# be less than the value of the persistent connection idle timeout (connect_timeout).
# The default value is 30 seconds.
#command_timeout = 30    # command timeout in seconds; must be less than the persistent connection idle timeout

[accelerate]    # only relevant when tuning Ansible's accelerated connections; the defaults are usually fine
#accelerate_port = 5099        # acceleration port
#accelerate_timeout = 30       # command execution timeout, in seconds
#accelerate_connect_timeout = 5.0   # connection timeout, in seconds

# The daemon timeout is measured in minutes. This time is measured
# from the last activity to the accelerate daemon.
#accelerate_daemon_timeout = 30    # minutes since the last activity before the accelerate daemon times out

# If set to yes, accelerate_multi_key will allow multiple
# private keys to be uploaded to it, though each user must
# have access to the system via SSH to add a new key. The default is "no".
#accelerate_multi_key = yes   # allow multiple private keys to be uploaded; each user still needs SSH access to add a key

[selinux]     # the SELinux-related settings are rarely touched; keep the defaults

# file systems that require special treatment when dealing with security context;
# the default behaviour that copies the existing context or uses the user default
# needs to be changed to use the file system dependent context.
#special_context_filesystems=nfs,vboxsf,fuse,ramfs,9p,vfat

# Set this to yes to allow libvirt_lxc connections to work without SELinux.
#libvirt_lxc_noseclabel = yes

[colors]      # colors used for output; configurable but rarely worth changing
#highlight = white
#verbose = blue
#warn = bright purple
#error = red
#debug = dark gray
#deprecate = purple
#skip = cyan
#unreachable = red
#ok = green
#changed = yellow
#diff_add = green
#diff_remove = red
#diff_lines = cyan

[diff]

# Always print diff when running (same as always running with -D/--diff)
always = no    # always print a diff when running (same as always using -D/--diff)

# Set how many context lines to show in diff
context = 3    # number of context lines to show in diffs
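
After editing ansible.cfg, one convenient way to see which of these settings actually differ from the built-in defaults is the ansible-config tool that ships with Ansible:

[root@wenCheng ~]# ansible-config dump --only-changed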
