Christopher Hauser / OpenStack-Cloud

Commit c2453760, authored Sep 18, 2014 by Jan Siersch
"copy of old project" (parent 46940a0f, 53 changed files)
README.md (new file, mode 100644)
# Introduction
This repository documents the installation of an OpenStack cluster for the Institute of Information Resource Management.
The software used is _Ubuntu 12.04.4 LTS_ with _OpenStack Havana_. The official documentation can be found here:
http://docs.openstack.org/havana/install-guide/install/apt/content/
## Cluster setup
The OpenStack cluster currently consists of one controller node, one network node and multiple compute nodes.

The __controller node__ runs the following services:

* __MySQL__ stores information about OpenStack services
* __Apache__ offers access to MySQL via phpMyAdmin and to the OpenStack cluster via Horizon
* __RabbitMQ__ message queue server used by OpenStack
* __MongoDB__ stores usage data from the telemetry service Ceilometer
* __OpenStack Keystone__ identity management for users, groups and services defined in OpenStack
* __OpenStack Glance__ image manager for the OS images that new VMs boot from
* __OpenStack Nova__ computation controller for running VMs
* __OpenStack Neutron Server__ connector between networking and computing services
* __OpenStack Horizon__ OpenStack dashboard, served by Apache
* __OpenStack Cinder__ block storage manager for volumes

Further tasks (not installed yet):

* __OpenStack Swift__ object storage manager
* __OpenStack Heat__ orchestration of instances and networks

The __network node__ runs the following service:

* __OpenStack Neutron__ network manager controller

Each __compute node__ runs the following services:

* __OpenStack Nova__ computation execution
* __OpenStack Cinder__ block storage persistence
* __OpenStack Neutron Agent__ agent for the network manager

Further tasks (not installed yet):

* __OpenStack Swift__ object storage persistence
The setup of nodes and networks is shown in the following figure. The controller node is named clusterlab-ctrl01,
the network node is named clusterlab-ctrl02. Compute nodes are named clusterlab01 to clusterlabNN. Four networks are used,
three of them private and one public. The control and data nets in 10.8.0.0/16 have internet access for
outgoing connections but are not reachable from outside. The public net 134.60.30.0/24 is directly accessible from
the internet; this physical net therefore carries the private virtual net 10.10.0.0/16. The virtual net is only
accessible from the physical omi network and cannot make outgoing connections to the internet.
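The address layout above can be sanity-checked with Python's `ipaddress` module. This is a sketch; the variable names are illustrative and not part of the actual setup:

```python
import ipaddress

# The networks described above (names are illustrative)
control_data_net = ipaddress.ip_network("10.8.0.0/16")     # control + data nets, outgoing-only
public_net       = ipaddress.ip_network("134.60.30.0/24")  # directly reachable from the internet
virtual_net      = ipaddress.ip_network("10.10.0.0/16")    # private virtual net carried over the public net

# The two private 10.x ranges must not overlap, otherwise routing between
# the control/data nets and the virtual net would be ambiguous.
assert not control_data_net.overlaps(virtual_net)

# Both 10.x nets fall into RFC 1918 private space; the public /24 does not.
assert control_data_net.is_private and virtual_net.is_private
assert not public_net.is_private
```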

# Repo structure
The subdirectory [config](../../tree/master/config) contains the manually modified configuration files, separated by node type.
For easier installation, the subdirectory [installation](../../tree/master/installation) contains scripts that install, modify and
place configuration files on the new node.
[documentation](../../tree/master/documentation) contains additional documentation
material such as graphics and presentations.
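The configuration files in this repository contain `${...}` placeholders (e.g. `${RABBIT_PASS}`, `${NEUTRON_PASS}`) instead of real credentials. A minimal sketch of how an installation script could render such a file, assuming a simple environment-style substitution (the actual scripts in `installation` may work differently, and the secret value below is purely hypothetical):

```python
from string import Template

# Excerpt in the style of the config files in this repo
config_template = (
    "rabbit_host = controller\n"
    "rabbit_password = ${RABBIT_PASS}\n"
)
secrets = {"RABBIT_PASS": "example-secret"}  # hypothetical value

# Template.substitute raises KeyError if a placeholder has no value,
# which catches missing secrets before a file is deployed.
rendered = Template(config_template).substitute(secrets)
```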
config/compute-node/etc/cinder/api-paste.ini
#############
# OpenStack #
#############

[composite:osapi_volume]
use = call:cinder.api:root_app_factory
/: apiversions
/v1: openstack_volume_api_v1
/v2: openstack_volume_api_v2

[composite:openstack_volume_api_v1]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = faultwrap sizelimit noauth apiv1
keystone = faultwrap sizelimit authtoken keystonecontext apiv1
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext apiv1

[composite:openstack_volume_api_v2]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = faultwrap sizelimit noauth apiv2
keystone = faultwrap sizelimit authtoken keystonecontext apiv2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext apiv2

[filter:faultwrap]
paste.filter_factory = cinder.api.middleware.fault:FaultWrapper.factory

[filter:noauth]
paste.filter_factory = cinder.api.middleware.auth:NoAuthMiddleware.factory

[filter:sizelimit]
paste.filter_factory = cinder.api.middleware.sizelimit:RequestBodySizeLimiter.factory

[app:apiv1]
paste.app_factory = cinder.api.v1.router:APIRouter.factory

[app:apiv2]
paste.app_factory = cinder.api.v2.router:APIRouter.factory

[pipeline:apiversions]
pipeline = faultwrap osvolumeversionapp

[app:osvolumeversionapp]
paste.app_factory = cinder.api.versions:Versions.factory

##########
# Shared #
##########

[filter:keystonecontext]
paste.filter_factory = cinder.api.middleware.auth:CinderKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = ${CINDER_PASS}
config/compute-node/etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = ${RABBIT_PASS}
glance_host = controller

[database]
connection = mysql://cinder:${CINDER_DBPASS}@controller/cinder
config/compute-node/etc/kernel/postinst.d/statoverride
#!/bin/sh
version="$1"
# passing the kernel version is required
[ -z "${version}" ] && exit 0
dpkg-statoverride --update --add root root 0644 /boot/vmlinuz-${version}
config/compute-node/etc/neutron/api-paste.ini
[composite:neutron]
use = egg:Paste#urlmap
/: neutronversions
/v2.0: neutronapi_v2_0

[composite:neutronapi_v2_0]
use = call:neutron.auth:pipeline_factory
noauth = extensions neutronapiapp_v2_0
keystone = authtoken keystonecontext extensions neutronapiapp_v2_0

[filter:keystonecontext]
paste.filter_factory = neutron.auth:NeutronKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = controller
admin_tenant_name = service
admin_user = neutron
admin_password = ${NEUTRON_PASS}

[filter:extensions]
paste.filter_factory = neutron.api.extensions:plugin_aware_extension_middleware_factory

[app:neutronversions]
paste.app_factory = neutron.api.versions:Versions.factory

[app:neutronapiapp_v2_0]
paste.app_factory = neutron.api.v2.router:APIRouter.factory
config/compute-node/etc/neutron/neutron.conf
[DEFAULT]
# debug = False
# verbose = False

# Where to store Neutron state files. This directory must be writable by the
# user executing the agent.
state_path = /var/lib/neutron

# Where to store lock files
lock_path = /var/lib/neutron/lock

# Neutron plugin provider module
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_port = 5672
rabbit_password = ${RABBIT_PASS}
rabbit_userid = guest

auth_host = controller
admin_tenant_name = service
admin_user = neutron
admin_password = ${NEUTRON_PASS}
auth_url = http://controller:35357/v2.0
auth_strategy = keystone

notification_driver = neutron.openstack.common.notifier.rpc_notifier

[quotas]

[agent]
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[keystone_authtoken]
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = ${NEUTRON_PASS}

[database]
connection = mysql://neutron:${NEUTRON_DBPASS}@controller/neutron

[service_providers]
service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
config/compute-node/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = ${LOCALIP_DATA}

[agent]

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
#-----------------------------------------------------------------------------
# Sample Configurations.
#-----------------------------------------------------------------------------
#
# 1. With VLANs on eth1.
# [database]
# connection = mysql://root:nova@127.0.0.1:3306/ovs_neutron
# [OVS]
# network_vlan_ranges = default:2000:3999
# tunnel_id_ranges =
# integration_bridge = br-int
# bridge_mappings = default:br-eth1
# [AGENT]
# Add the following setting, if you want to log to a file
#
# 2. With tunneling.
# [database]
# connection = mysql://root:nova@127.0.0.1:3306/ovs_neutron
# [OVS]
# network_vlan_ranges =
# tunnel_id_ranges = 1:1000
# integration_bridge = br-int
# tunnel_bridge = br-tun
# local_ip = 10.0.0.3
config/compute-node/etc/nova/api-paste.ini
############
# Metadata #
############
[composite:metadata]
use = egg:Paste#urlmap
/: meta

[pipeline:meta]
pipeline = ec2faultwrap logrequest metaapp

[app:metaapp]
paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory

#######
# EC2 #
#######
[composite:ec2]
use = egg:Paste#urlmap
/services/Cloud: ec2cloud

[composite:ec2cloud]
use = call:nova.api.auth:pipeline_factory
noauth = ec2faultwrap logrequest ec2noauth cloudrequest validator ec2executor
keystone = ec2faultwrap logrequest ec2keystoneauth cloudrequest validator ec2executor

[filter:ec2faultwrap]
paste.filter_factory = nova.api.ec2:FaultWrapper.factory

[filter:logrequest]
paste.filter_factory = nova.api.ec2:RequestLogging.factory

[filter:ec2lockout]
paste.filter_factory = nova.api.ec2:Lockout.factory

[filter:ec2keystoneauth]
paste.filter_factory = nova.api.ec2:EC2KeystoneAuth.factory

[filter:ec2noauth]
paste.filter_factory = nova.api.ec2:NoAuth.factory

[filter:cloudrequest]
controller = nova.api.ec2.cloud.CloudController
paste.filter_factory = nova.api.ec2:Requestify.factory

[filter:authorizer]
paste.filter_factory = nova.api.ec2:Authorizer.factory

[filter:validator]
paste.filter_factory = nova.api.ec2:Validator.factory

[app:ec2executor]
paste.app_factory = nova.api.ec2:Executor.factory

#############
# Openstack #
#############
[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
/v1.1: openstack_compute_api_v2
/v2: openstack_compute_api_v2
/v3: openstack_compute_api_v3

[composite:openstack_compute_api_v2]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth ratelimit osapi_compute_app_v2
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit osapi_compute_app_v2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v2

[composite:openstack_compute_api_v3]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth_v3 ratelimit_v3 osapi_compute_app_v3
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit_v3 osapi_compute_app_v3
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v3

[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory

[filter:noauth]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory

[filter:noauth_v3]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddlewareV3.factory

[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory

[filter:ratelimit_v3]
paste.filter_factory = nova.api.openstack.compute.plugins.v3.limits:RateLimitingMiddleware.factory

[filter:sizelimit]
paste.filter_factory = nova.api.sizelimit:RequestBodySizeLimiter.factory

[app:osapi_compute_app_v2]
paste.app_factory = nova.api.openstack.compute:APIRouter.factory

[app:osapi_compute_app_v3]
paste.app_factory = nova.api.openstack.compute:APIRouterV3.factory

[pipeline:oscomputeversions]
pipeline = faultwrap oscomputeversionapp

[app:oscomputeversionapp]
paste.app_factory = nova.api.openstack.compute.versions:Versions.factory

##########
# Shared #
##########
[filter:keystonecontext]
paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = ${NOVA_PASS}
auth_version = v2.0
config/compute-node/etc/nova/nova.conf
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata

auth_strategy=keystone
rpc_backend=nova.rpc.impl_kombu
rabbit_host=controller
rabbit_password=${ADMIN_PASS}
my_ip=${LOCALIP_CTRL}
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=${LOCALIP_CTRL}
novncproxy_base_url=http://${CONTROLLERIP_PUBLIC}:6080/vnc_auto.html
glance_host=controller

#### NOVA NETWORKING
#network_manager=nova.network.manager.FlatDHCPManager
#firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
#network_size=254
#allow_same_net_traffic=False
#multi_host=True
#send_arp_for_ha=True
#share_dhcp_address=True
#force_dhcp_release=True
#flat_network_bridge=br100
#flat_interface=eth0
#public_interface=eth0

# neutron networking
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://controller:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=${NEUTRON_PASS}
neutron_admin_auth_url=http://controller:35357/v2.0
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron

[database]
# The SQLAlchemy connection string used to connect to the database
connection=mysql://nova:${NOVA_DBPASS}@controller/nova
config/compute-node/etc/sysctl.conf
#
# /etc/sysctl.conf - Configuration file for setting system variables
# See /etc/sysctl.d/ for additional system variables
# See sysctl.conf (5) for information.
#
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0
config/controller-node/README
config/controller-node/etc/ceilometer/ceilometer.conf
[DEFAULT]
# the filename to use with sqlite (string value)
sqlite_db = ceilometer.sqlite
# unused because mysql is configured as storage!

# (Optional) The base directory used for relative --log-file
# paths (string value)
log_dir = /var/log/ceilometer

# The RabbitMQ broker address where a single node is used
rabbit_host = controller
rabbit_password = ${RABBIT_PASS}

[publisher_rpc]
metering_secret = ${CEILOMETER_PASS}
# temporary solution, create separate token

[ssl]

[database]
connection = mongodb://ceilometer:${CEILOMETER_DBPASS}@${CONTROLLERIP_CTRL}:27017/ceilometer

[alarm]

[rpc_notifier2]

[api]

[service_credentials]
os_username = ceilometer
os_tenant_name = service
os_password = ${CEILOMETER_PASS}

[dispatcher_file]

[keystone_authtoken]
auth_host = controller
auth_port = 35357
auth_protocol = http
auth_uri = http://controller:5000
admin_tenant_name = service
admin_user = ceilometer
admin_password = ${CEILOMETER_PASS}

[collector]

[matchmaker_ring]

[matchmaker_redis]
config/controller-node/etc/cinder/api-paste.ini
#############
# OpenStack #
#############

[composite:osapi_volume]
use = call:cinder.api:root_app_factory
/: apiversions
/v1: openstack_volume_api_v1
/v2: openstack_volume_api_v2

[composite:openstack_volume_api_v1]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = faultwrap sizelimit noauth apiv1
keystone = faultwrap sizelimit authtoken keystonecontext apiv1
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext apiv1

[composite:openstack_volume_api_v2]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = faultwrap sizelimit noauth apiv2
keystone = faultwrap sizelimit authtoken keystonecontext apiv2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext apiv2

[filter:faultwrap]
paste.filter_factory = cinder.api.middleware.fault:FaultWrapper.factory

[filter:noauth]
paste.filter_factory = cinder.api.middleware.auth:NoAuthMiddleware.factory

[filter:sizelimit]
paste.filter_factory = cinder.api.middleware.sizelimit:RequestBodySizeLimiter.factory

[app:apiv1]
paste.app_factory = cinder.api.v1.router:APIRouter.factory

[app:apiv2]
paste.app_factory = cinder.api.v2.router:APIRouter.factory

[pipeline:apiversions]
pipeline = faultwrap osvolumeversionapp

[app:osvolumeversionapp]
paste.app_factory = cinder.api.versions:Versions.factory

##########
# Shared #
##########

[filter:keystonecontext]
paste.filter_factory = cinder.api.middleware.auth:CinderKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = localhost
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = ${CINDER_PASS}
config/controller-node/etc/cinder/cinder.conf
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = localhost
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = ${RABBIT_PASS}

[database]
connection = mysql://cinder:${CINDER_DBPASS}@localhost/cinder
config/controller-node/etc/glance/glance-api-paste.ini
# Use this pipeline for no auth or image caching - DEFAULT
[pipeline:glance-api]
pipeline = versionnegotiation unauthenticated-context rootapp

# Use this pipeline for image caching and no auth
[pipeline:glance-api-caching]
pipeline = versionnegotiation unauthenticated-context cache rootapp

# Use this pipeline for caching w/ management interface but no auth
[pipeline:glance-api-cachemanagement]
pipeline = versionnegotiation unauthenticated-context cache cachemanage rootapp

# Use this pipeline for keystone auth
[pipeline:glance-api-keystone]
pipeline = versionnegotiation authtoken context rootapp

# Use this pipeline for keystone auth with image caching
[pipeline:glance-api-keystone+caching]
pipeline = versionnegotiation authtoken context cache rootapp

# Use this pipeline for keystone auth with caching and cache management
[pipeline:glance-api-keystone+cachemanagement]
pipeline = versionnegotiation authtoken context cache cachemanage rootapp

# Use this pipeline for authZ only. This means that the registry will treat a
# user as authenticated without making requests to keystone to reauthenticate
# the user.
[pipeline:glance-api-trusted-auth]
pipeline = versionnegotiation context rootapp

# Use this pipeline for authZ only. This means that the registry will treat a
# user as authenticated without making requests to keystone to reauthenticate
# the user and uses cache management
[pipeline:glance-api-trusted-auth+cachemanagement]
pipeline = versionnegotiation context cache cachemanage rootapp

[composite:rootapp]