diff --git a/content/cloud/_index.html b/content/cloud/_index.html
new file mode 100644
index 0000000000000000000000000000000000000000..a89f509d05d463c4583a8314dc42cbb8ac89b0f1
--- /dev/null
+++ b/content/cloud/_index.html
@@ -0,0 +1,3 @@
---
title: Cloud
---
diff --git a/content/cloud/about/_index.md b/content/cloud/about/_index.md
new file mode 100644
index 0000000000000000000000000000000000000000..839babda7caa898ec7d1365bc0d3cbd50de1f623
--- /dev/null
+++ b/content/cloud/about/_index.md
@@ -0,0 +1,46 @@
---
title: "About metacentrum.cz cloud"
date: 2021-05-18T11:22:35+02:00
draft: false
---

## Hardware
MetaCentrum Cloud consists of 13 computational clusters containing 283 hypervisors
with a total of 9560 cores, 62 GPU cards and 184 TB of RAM. For applications with special demands, a cluster
with local SSDs and GPU cards is available. OpenStack instances, the object store and the image store
can leverage more than 1.5 PB of highly available capacity provided by a Ceph storage system.

## Software

MetaCentrum Cloud is built on top of OpenStack, a free, open-standard cloud computing platform
and one of the top 3 most active open source projects in the world. A new OpenStack major version is
released twice a year. OpenStack functionality is split across more than 50 services.

## Usage statistics
More than 400 users use the MetaCentrum Cloud platform, and more than 130k VMs were started last year.

## Current MetaCentrum Cloud release

OpenStack Train

## Deployed services

The following table lists the OpenStack services deployed in MetaCentrum Cloud. Services are separated
into two groups based on their stability and the level of support we are able to provide. All services in the production
group are well tested by our team and covered by cloud@metacentrum.cz support. To support a
variety of experimental use cases, we plan to deploy several services as experimental; these can be useful
for testing purposes, but their functionality is not covered by cloud@metacentrum.cz support.

| Service   | Description            | Type         |
|-----------|------------------------|--------------|
| cinder    | Block Storage service  | production   |
| glance    | Image service          | production   |
| heat      | Orchestration service  | production   |
| horizon   | Dashboard              | production   |
| keystone  | Identity service       | production   |
| monasca   | Monitoring service     | experimental |
| neutron   | Networking service     | production   |
| nova      | Compute service        | production   |
| octavia   | Load balancing service | experimental |
| placement | Placement service      | production   |
| swift/s3  | Object Storage service | production   |
diff --git a/content/cloud/best-practices/images/accessing-vms-through-jump-host-6-mod.png b/content/cloud/best-practices/images/accessing-vms-through-jump-host-6-mod.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b1695378da614bf9daefa7feac485f395a754bc
Binary files /dev/null and b/content/cloud/best-practices/images/accessing-vms-through-jump-host-6-mod.png differ
diff --git a/content/cloud/best-practices/index.md b/content/cloud/best-practices/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..aca2370115b1456c8762e56bbccd06d504855d3e
--- /dev/null
+++ b/content/cloud/best-practices/index.md
@@ -0,0 +1,230 @@
---
title: "Best practices"
date: 2021-05-18T11:22:35+02:00
draft: false
---

The following article summarizes effective approaches to using our cloud.

## How many public IP addresses do I need?
There are [two pools of public IPv4 addresses available](/network/#group-project).
**Unfortunately, the number of available public IPv4 addresses is limited.** Read the details on [how to set up your project networking](/network/).

In most cases, even when you build a huge cloud infrastructure, you should be able to access it via a few (up to two) public IP addresses.



The cloud architecture in this example contains the following project VMs:

| VM name | VM operating system | VM IP addresses | VM flavor type | VM description |
| :--- | :---: | :-----------: | :------: | :------: |
| freznicek-cos8 | centos-8-x86_64 | 172.16.0.54, 147.251.21.72 (public) | standard.medium | jump host |
| freznicek-ubu | ubuntu-focal-x86_64 | 172.16.1.67 | standard.medium | internal VM |
| freznicek-deb10 | debian-10-x86_64 | 172.16.0.158 | standard.medium | internal VM |
| ... | ... | ... | ... | internal VM |

### Setting up a VPN tunnel over encrypted SSH with [sshuttle](https://github.com/sshuttle/sshuttle)

```sh
# terminal A
# Launch a tunnel through the jump-host VM

# Install sshuttle
if grep -qE 'ID_LIKE=.*debian' /etc/os-release; then
  # on Debian-like OS
  sudo apt-get update
  sudo apt-get -y install sshuttle
elif grep -qE 'ID_LIKE=.*rhel' /etc/os-release; then
  # on RHEL-like systems
  sudo yum -y install sshuttle
fi

# Establish the SSH tunnel (and stay connected), where
# 147.251.21.72 is the IP address of the example jump-host and
# 172.16.0.0/22 is the IP subnet where the example cloud resources are internally available
sshuttle -r centos@147.251.21.72 172.16.0.0/22
```

### Accessing (hidden) project VMs through the VPN tunnel

```sh
# terminal B
# Access all VMs allocated in the project's 172.16.0.0/22 subnet (a C5 instance shown on the picture)
$ ssh debian@172.16.0.158 uname -a
Linux freznicek-deb10 4.19.0-14-cloud-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64 GNU/Linux

# Access is not limited to any protocol; you may access web servers as well as databases...
$ curl 172.16.1.67:8080
Hello, world, cnt=1, hostname=freznicek-ubu
```

## How to store project data

Every project generates data that need to be stored. The options are (sorted by preference):
 * as objects or files in an [S3 compatible storage](https://en.wikipedia.org/wiki/Amazon_S3)
   * S3 compatible storage may be requested as a separate cloud storage resource ([OpenStack Swift storage + S3 API](https://docs.openstack.org/swift/latest/s3_compat.html))
   * S3 storage may also be easily launched on one of the project VMs ([minio server](https://github.com/minio/minio))
 * as files on
   * a separate (Ceph) volume
   * the virtual machine disk volume (i.e. no explicit volume for the project data)
 * as objects or files in the [OpenStack Swift storage](https://docs.openstack.org/swift/train/admin/objectstorage-intro.html)

MetaCentrum Cloud stores raw data:
 * in Ceph cloud storage on rotational disks (SSDs will be available soon)
 * on hypervisor (bare-metal) disks (rotational, SSD, NVMe SSD)

We encourage all users to back up important data themselves while we work on a cloud-native backup solution.
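As an illustration of the preferred S3 option above, here is a minimal sketch using the [minio client `mc`](https://docs.min.io/docs/minio-client-complete-guide); the alias, endpoint URL and credentials are placeholders you would replace with the values of your actual S3 endpoint:

```sh
# Register the S3 endpoint under an alias (endpoint and keys are placeholders)
mc alias set mycloud https://s3.example.org ACCESS_KEY SECRET_KEY

# Create a bucket and upload project data into it
mc mb mycloud/project-data
mc cp ./results.tar.gz mycloud/project-data/

# List what is stored
mc ls mycloud/project-data
```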
## How to compute (scientific) tasks

Your application may be:
 * `A.` a single-instance application, running on one of the cloud computation resources
 * `B.` a multi-instance application with messaging support (MPI), where all instances run on the same cloud computation resource
 * `C.` true distributed computing, where the application runs in jobs scheduled to multiple cloud computation resources

Applications running on a single cloud resource (`A.` and `B.`) are a direct match for MetaCentrum Cloud OpenStack. Distributed applications (`C.`) are best handled by the [MetaCentrum PBS system](https://metavo.metacentrum.cz/cs/state/personal).

## How to create and maintain cloud resources

Your project is computed within a MetaCentrum Cloud OpenStack project, where you can claim MetaCentrum Cloud OpenStack resources (for example virtual machines, floating IPs, ...). There are multiple ways to set up MetaCentrum Cloud OpenStack resources:
 * manually, using the [MetaCentrum Cloud OpenStack Dashboard UI](https://dashboard.cloud.muni.cz) (OpenStack Horizon)
 * automated approaches
   * [terraform](https://registry.terraform.io/providers/terraform-provider-openstack/openstack/latest/docs) ([example project](https://github.com/terraform-provider-openstack/terraform-provider-openstack/tree/main/examples/app-with-networking))
   * ansible
   * [openstack heat](https://docs.openstack.org/heat/train/template_guide/hello_world.html)

If your project infrastructure (MetaCentrum Cloud OpenStack resources) within the cloud is static, you may select the manual approach with the [MetaCentrum Cloud OpenStack Dashboard UI](https://dashboard.cloud.muni.cz). Some projects need to allocate MetaCentrum Cloud OpenStack resources dynamically; in such cases we strongly encourage automation even at this stage.

## How to transfer your work to cloud resources and keep it up-to-date

There are several options for transferring your project to cloud resources:
 * manually with `scp`
 * automatically with `ansible` ([example](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/de673766b832c48142c6ad1be73f5bce046b02a2/ansible/roles/cloud-project-native/tasks/init.yml#L29-47))
 * automatically with `terraform`
 * indirectly in a project container (example: [releasing a project container image](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/de673766b832c48142c6ad1be73f5bce046b02a2/.gitlab-ci.yml#L17-80), [pulling and running a project image](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/de673766b832c48142c6ad1be73f5bce046b02a2/ansible/roles/cloud-project-container/tasks/deploy.yml#L16-28))
 * indirectly in an OpenStack (Glance) image (you need to obtain the image-uploader role)
   * OpenStack Glance images may be [public, private, community or shared](/gui/#image-visibility).

### SSH to cloud VM resources and manual update

In this scenario, you log in to your cloud VM and perform all needed actions manually. This approach does not scale well and is not very effective, as different users may configure cloud VM resources in different ways, sometimes resulting in different resource behavior.

### Automated work transfer and synchronization with docker (or podman)

There are automation tools which may ease your cloud usage (a minimal example follows the list):
 * ansible and/or terraform
 * a container runtime engine (docker, podman, ...)
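As a quick illustration of the first tool, a single ad-hoc Ansible command can already keep a VM updated. This is a minimal sketch assuming the example topology above (the internal Debian VM at 172.16.0.158, reachable e.g. through the sshuttle tunnel):

```sh
# Ad-hoc Ansible run: refresh the APT package cache and upgrade packages
# on the internal VM from the example topology (no playbook needed).
ansible all -i '172.16.0.158,' -u debian --become \
  -m apt -a 'update_cache=yes upgrade=dist'
```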
Ansible is a cloud automation tool which helps you with:
 * keeping your VMs updated
 * automatically migrating your applications or data to/from the cloud VMs


A container runtime engine helps you put your work into a container stored in a container registry.
Putting your work into a container has several advantages:
 * you can share the code including binaries in a consistent environment (even across different operating systems)
 * it avoids application [re]compilation in the cloud
 * your application running in the container is isolated from the host's container runtime, so
   * you may run multiple instances easily
   * you may easily compare different versions at once without collisions
 * you become ready for a future Kubernetes cloud

As a container registry we suggest either:
 * the public quay.io ([you need to register for free first](https://quay.io/signin/))
 * the private Masaryk University registry [registry.gitlab.ics.muni.cz:443](registry.gitlab.ics.muni.cz:443)

An example of such an approach is demonstrated in the [`cloud-estimate-pi` project](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi).

## How to get data from your project's experiments to your workstation

It depends on how your data are stored; the options are:
 * file transfer
   * manual file transfer with `scp` (and possibly `sshuttle`)
   * automated file transfer with `scp` + `ansible` (and possibly `sshuttle`), demonstrated in the [`cloud-estimate-pi` project](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/master/ansible/roles/cloud-project-container/tasks/download.yml)
 * database data/object transfer
   * data stored in S3 compatible storage may be easily retrieved via the [minio client application `mc`](https://docs.min.io/docs/minio-client-complete-guide)
   * data stored in OpenStack Swift may be retrieved with the [OpenStack Swift python client `swift`](https://docs.openstack.org/python-swiftclient/train/swiftclient.html)


## How to make your application in the cloud highly available

Let's assume your application is already running in multiple instances in the cloud.
To make your application highly available (HA), you need to
 * run the application instances on different cloud resources
 * use the MetaCentrum Cloud load-balancer component (based on [OpenStack Octavia](https://docs.openstack.org/octavia/train/reference/introduction.html#octavia-terminology)), which is going to balance traffic to one of the app's instances.

Your application will surely need a Fully Qualified Domain Name (FQDN) to become popular. The FQDN is set on the public floating IP linked to the load-balancer.


## Cloud project example and workflow recommendations

This chapter summarizes effective cloud workflows using the example [`cloud-estimate-pi` project](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi).

The project recommendations are:
 1. Project files should be versioned in [a VCS](https://en.wikipedia.org/wiki/Version_control) (git)
 1. The project repository should
    * contain the documentation
    * follow a standard directory structure `src/`, `conf/`, `kubernetes/`
    * include a CI/CD process pipeline ([`.gitlab-ci.yml`](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/master/.gitlab-ci.yml), ...)
    * contain deployment manifests or scripts (kubernetes manifests or declarative deployment files (ansible, terraform, puppet, ...))
 1. The project release should be automated and triggered by pushing a [semver v2 compatible](https://semver.org/) [tag](https://dev.to/neshaz/a-tutorial-for-tagging-releases-in-git-147e)
 1. The project should support execution in a container, as there are significant benefits ([`Dockerfile`](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/master/Dockerfile)):
    * a consistent environment (surrounding the application)
    * application portability across all operating systems
    * application isolation from the host operating system
    * multiple ways to execute the application (container clouds support advanced container life-cycle management)
 1. The project should have a changelog, either manually written or generated (for instance [`CHANGELOG.md`](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/master/CHANGELOG.md))


We recommend that every project defines a cloud usage workflow, which may consist of:
 1. Cloud resource initialization, performing
    * a cloud resource update to the latest state
    * installation of the tools necessary for project compilation and execution
    * a test of the container infrastructure (if it is used)
    * a transfer of project files if they need to be compiled
 1. Project deployment (and execution) in the cloud, consisting of
    * compilation of the project in the cloud (if native execution is selected)
    * execution of the project application[s] in the cloud
    * storing or collecting project data and logs
 1. Download of project data from the cloud to the user's workstation (for further analysis or troubleshooting)
 1. Cloud resource destruction

## Road-map to effective cloud usage

Project automation is usually done in CI/CD pipelines. Read the [GitLab CI/CD article](https://docs.gitlab.com/ee/ci/introduction/) for more details.


The following table shows the different cloud usage phases:

| Cloud usage phase | Cloud resource management | Project packaging | Project deployment | Project execution | Project data synchronization | Project troubleshooting |
| :--- | :---: | :-----------: | :------: | :------------: | :------------: | :------------: |
| ineffective manual approach | manual (`ssh`) | manually built binaries (versioned?) | manual deployment (scp) | manual execution (ssh) | manual transfers (scp) | manual investigation on the VM (scp) |
| ... | ... | ... | ... | ... | ... | ... |
| [continuous delivery](https://docs.gitlab.com/ee/ci/introduction/#continuous-delivery) (automated, but deploy manual) | semi-automated (GUI + `ansible` executed manually) | container ([semver](https://semver.org) versioned) | semi-automated (`ansible` executed manually) | semi-automated (`ansible` executed manually) | semi-automated (`ansible` executed manually) | semi-automated (`ansible` and `ssh` manually) |
| [continuous deployment](https://docs.gitlab.com/ee/ci/introduction/#continuous-deployment) (fully-automated) | automated (`terraform` and/or `ansible` in CI/CD) | container ([semver](https://semver.org) versioned) | automated (`ansible` in CI/CD) | automated (`ansible` in CI/CD) | automated (`ansible` in CI/CD) | semi-automated (`ansible` in CI/CD and `ssh` manually) |



## How to convert a legacy application into a container for the cloud?

Containerizing applications is one of the best practices when you want to share your application and execute it in a cloud. Read about [the benefits](https://cloud.google.com/containers).
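Before going through the individual steps, here is a minimal sketch of the whole cycle; the registry path and image name are illustrative placeholders, not an existing project:

```sh
# Build an image from your project's Dockerfile (registry path and tag are placeholders)
docker build -t quay.io/myuser/my-legacy-app:1.0.0 .

# Authenticate and release (push) the image to the registry
docker login quay.io
docker push quay.io/myuser/my-legacy-app:1.0.0

# Any machine with a container engine, including cloud VMs, can now pull and run it
docker run --rm quay.io/myuser/my-legacy-app:1.0.0
```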
The application containerization process consists of the following steps:
 * Select a container registry (where the container images with your applications are stored)
   * Publicly available registries like [quay.io](https://quay.io) are best, as everyone may receive your application even without credentials
 * Containerize your project applications by creating a `Dockerfile` ([example](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/master/Dockerfile))
   * Follow a [docker guide](https://www.freecodecamp.org/news/a-beginners-guide-to-docker-how-to-create-your-first-docker-application-cc03de9b639f/) if you are not familiar with the `Dockerfile` syntax
   * If your project is huge and contains multiple applications, it is recommended to divide it into a few parts by topic, each part building a separate container.
 * Project CI/CD jobs should build the applications, create the container image[s] and finally release (push) the container image[s] with the applications to the container registry
   * Everyone is then able to use your applications (packaged in a container image) regardless of which operating system (OS) they use. Container engines (docker, podman, ...) are available for all mainstream OSes.
 * Cloud resources are then told to pull and run your container image[s] ([example](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/de673766b832c48142c6ad1be73f5bce046b02a2/ansible/roles/cloud-project-container/tasks/deploy.yml#L11-28))

Learn the best practices from our cloud example [project `cloud-estimate-pi`](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi).
diff --git a/content/cloud/cli/images/app_creds_1.png b/content/cloud/cli/images/app_creds_1.png
new file mode 100644
index 0000000000000000000000000000000000000000..d3a376103c6cef0213835e00a76b911dd4a80c31
Binary files /dev/null and b/content/cloud/cli/images/app_creds_1.png differ
diff --git a/content/cloud/cli/images/app_creds_2.png b/content/cloud/cli/images/app_creds_2.png
new file mode 100644
index 0000000000000000000000000000000000000000..88f35f4719bf88d63384c7b4cce1910bcae56986
Binary files /dev/null and b/content/cloud/cli/images/app_creds_2.png differ
diff --git a/content/cloud/cli/index.md b/content/cloud/cli/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..435f2de82437f91ef03f8db38c2c4eb936892368
--- /dev/null
+++ b/content/cloud/cli/index.md
@@ -0,0 +1,280 @@
---
title: "Command Line Interface"
date: 2021-05-18T11:22:35+02:00
draft: false
---

In order to have access to OpenStack's API, you have to use so-called OpenStack Application Credentials. In short,
it is a form of token-based authentication providing easy and secure access without the use of passwords.

## Getting Credentials
1. In **Identity > Application Credentials**, click on **Create Application Credential**.
2. Choose a name, description and expiration date & time.



{{< hint info >}}
**Notice:**

Do NOT select specific roles, unless directed otherwise by user support.
{{< /hint >}}

{{< hint info >}}
**Notice:**

If you decide to select specific roles, you should always include at least the **member** role.
If you are planning to use the orchestration API, add the **heat_stack_owner** role as well and
check **Unrestricted**.
{{< /hint >}}

3. Download the provided configuration files for the OpenStack CLI client.


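Once the CLI client from the next section is installed, the downloaded files can be used in two ways; a minimal sketch follows (the openrc file name below is illustrative, yours will differ):

```sh
# Option A: load the application credential into the environment
source app-cred-mycred-openrc.sh

# Option B: let the client read clouds.yaml from its default location
mkdir -p ~/.config/openstack
mv clouds.yaml ~/.config/openstack/

# Verify that authentication works
openstack token issue
```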
## Setting Up
1. [Install](https://pypi.org/project/python-openstackclient/) and
   [configure](https://docs.openstack.org/python-openstackclient/train/configuration/index.html)
   the OpenStack CLI client.

{{< hint danger >}}
**WARNING:**

Add the following line to the **openrc** file:

`export OS_VOLUME_API_VERSION=3`

Add the following line to the **clouds.yaml** file:

`volume_api_version: 3`
{{< /hint >}}


2. Follow the official [Launch instances](https://docs.openstack.org/nova/train/user/launch-instances.html) guide.

---


## Creating a key-pair

You can either get your private key from the dashboard or use the **ssh-keygen** command to create a new private key:

```
ssh-keygen -b 4096
```
You will then be asked to specify the output file and a passphrase for your key.


1. Assuming your SSH public key is stored in `~/.ssh/id_rsa.pub`:
```
openstack keypair create --public-key ~/.ssh/id_rsa.pub my-key1
```

## Create security group
1. Create the security group:
```
openstack security group create my-security-group
```

2. Add rules to your security group:
```
openstack security group rule create --description "Permit SSH" --remote-ip 0.0.0.0/0 --protocol tcp --dst-port 22 --ingress my-security-group
openstack security group rule create --description "Permit ICMP (any)" --remote-ip 0.0.0.0/0 --protocol icmp --icmp-type -1 --ingress my-security-group
```

3. Verify:
```
openstack security group show my-security-group
```

## Create network

1. Create a network + subnet (from the auto-allocated pool):
```
openstack network create my-net1
openstack subnet create --network my-net1 --subnet-pool private-192-168 my-sub1
```

## Router management

### Router Creation

2. Create a router:
```
openstack router create my-router1
```
The router currently has no ports, which makes it pretty useless; we need to create at least two interfaces (external and internal).

3. Set an external network for the router (say, public-muni-147-251-124); the external port will be created automatically:
```
openstack router set --external-gateway public-muni-147-251-124 my-router1
```

4. Check which IP address is set as the gateway for our subnet (default: the first address of the subnet):
```
GW_IP=$(openstack subnet show my-sub1 -c gateway_ip -f value)
```

5. Create an internal port for the router (the gateway for the network my-net1):
```
openstack port create --network my-net1 --disable-port-security --fixed-ip ip-address=$GW_IP my-net1-port1-gw
```

6. Add the port to the router:
```
openstack router add port my-router1 my-net1-port1-gw
```
### Clear gateway

1. Find your router:
```
$ openstack router list
+--------------------------------------+-----------------------+--------+-------+-------------+------+----------------------------------+
| ID                                   | Name                  | Status | State | Distributed | HA   | Project                          |
+--------------------------------------+-----------------------+--------+-------+-------------+------+----------------------------------+
| 0bd0374d-b62e-429a-8573-3e8527399b68 | auto_allocated_router | ACTIVE | UP    | None        | None | f0c339b86ddb4699b6eab7acee8d4508 |
+--------------------------------------+-----------------------+--------+-------+-------------+------+----------------------------------+
```
2. Verify:
```
$ openstack router show 0bd0374d-b62e-429a-8573-3e8527399b68
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                   | Value                                                                                                                                                                                       |
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up          | UP                                                                                                                                                                                          |
| availability_zone_hints | None                                                                                                                                                                                        |
| availability_zones      | None                                                                                                                                                                                        |
| created_at              | 2019-06-06T04:47:15Z                                                                                                                                                                        |
| description             | None                                                                                                                                                                                        |
| distributed             | None                                                                                                                                                                                        |
| external_gateway_info   | {"network_id": "8d5e18ab-5d43-4fb5-83e9-eb581c4d5365", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "41e0cd1c-5ab8-465f-8605-2e7d6a3fe5b4", "ip_address": "147.251.124.177"}]} |
| flavor_id               | None                                                                                                                                                                                        |
| ha                      | None                                                                                                                                                                                        |
| id                      | 0bd0374d-b62e-429a-8573-3e8527399b68                                                                                                                                                        |
| interfaces_info         | [{"port_id": "92c3f6fe-afa8-47c6-a1a6-f6a1b3c54f72", "ip_address": "192.168.8.193", "subnet_id": "e903d5b9-ac90-4ca8-be2c-c509a0153982"}]                                                   |
| location                | Munch({'cloud': '', 'region_name': 'brno1', 'zone': None, 'project': Munch({'id': 'f0c339b86ddb4699b6eab7acee8d4508', 'name': None, 'domain_id': None, 'domain_name': None})})              |
| name                    | auto_allocated_router                                                                                                                                                                       |
| project_id              | f0c339b86ddb4699b6eab7acee8d4508                                                                                                                                                            |
| revision_number         | 24                                                                                                                                                                                          |
| routes                  |                                                                                                                                                                                             |
| status                  | ACTIVE                                                                                                                                                                                      |
| tags                    |                                                                                                                                                                                             |
| updated_at              | 2019-06-06T06:34:34Z                                                                                                                                                                        |
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
```
3. Unset the gateway (by the ID of the router):

```
$ openstack router unset --external-gateway 0bd0374d-b62e-429a-8573-3e8527399b68
```

### Set Gateway

1. Choose a new external network:

```
$ openstack network list
+--------------------------------------+--------------------------+--------------------------------------+
| ID                                   | Name                     | Subnets                              |
+--------------------------------------+--------------------------+--------------------------------------+
| 410e1b3a-1971-446b-b835-bf503917680d | public-cesnet-78-128-251 | 937106e2-3d51-43cc-83b6-c779465011e5 |
| 8d5e18ab-5d43-4fb5-83e9-eb581c4d5365 | public-muni-147-251-124  | 41e0cd1c-5ab8-465f-8605-2e7d6a3fe5b4 |
| c708270d-0545-4be2-9b8f-84cf75ce09cf | auto_allocated_network   | e903d5b9-ac90-4ca8-be2c-c509a0153982 |
| d896044f-90eb-45ee-8cb1-86bf8cb3f9fe | private-muni-10-16-116   | 3d325abf-f9f8-4790-988f-9cd3d1dea4f3 |
+--------------------------------------+--------------------------+--------------------------------------+
```

2. Set the new external network for the router:

```
$ openstack router set --external-gateway public-cesnet-78-128-251 0bd0374d-b62e-429a-8573-3e8527399b68
```


## Create volume

{{< hint danger >}}
**WARNING:**

Skipping this section can lead to irreversible loss of data.
{{< /hint >}}

Volumes are created automatically when creating an instance in the GUI, but with the CLI we need to create them manually.

1. Create a bootable volume from an image (e.g. CentOS):
```
openstack volume create --image "centos-7-1809-x86_64" --size 40 my_vol1
```
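Volume creation may take a while; before booting from the volume, you can check that it has reached the `available` status:

```
# Prints the volume status; wait until it reports 'available'
openstack volume show my_vol1 -c status -f value
```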
## Create server

1. Create an instance:
```
openstack server create --flavor "standard.small" --volume my_vol1 \
  --key-name my-key1 --security-group my-security-group --network my-net1 my-server1
```

## Floating IP address management

### Creating and assigning a new FIP

1. Allocate a new floating IP:

```
$ openstack floating ip create public-cesnet-78-128-251
+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field               | Value                                                                                                                                                                          |
+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at          | 2019-06-06T06:56:51Z                                                                                                                                                           |
| description         |                                                                                                                                                                                |
| dns_domain          | None                                                                                                                                                                           |
| dns_name            | None                                                                                                                                                                           |
| fixed_ip_address    | None                                                                                                                                                                           |
| floating_ip_address | 78.128.251.27                                                                                                                                                                  |
| floating_network_id | 410e1b3a-1971-446b-b835-bf503917680d                                                                                                                                           |
| id                  | d054b6b3-bbd3-485d-a46b-b80682df8fc8                                                                                                                                           |
| location            | Munch({'cloud': '', 'region_name': 'brno1', 'zone': None, 'project': Munch({'id': 'f0c339b86ddb4699b6eab7acee8d4508', 'name': None, 'domain_id': None, 'domain_name': None})}) |
| name                | 78.128.251.27                                                                                                                                                                  |
| port_details        | None                                                                                                                                                                           |
| port_id             | None                                                                                                                                                                           |
| project_id          | f0c339b86ddb4699b6eab7acee8d4508                                                                                                                                               |
| qos_policy_id       | None                                                                                                                                                                           |
| revision_number     | 0                                                                                                                                                                              |
| router_id           | None                                                                                                                                                                           |
| status              | DOWN                                                                                                                                                                           |
| subnet_id           | None                                                                                                                                                                           |
| tags                | []                                                                                                                                                                             |
| updated_at          | 2019-06-06T06:56:51Z                                                                                                                                                           |
+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
```

2. Assign it to your server:

```
$ openstack server add floating ip net-test1 78.128.251.27
```

### Removing an existing floating IP

1. List your servers:

```
$ openstack server list
+--------------------------------------+-----------+--------+-------------------------------------------------------+-------+----------------+
| ID                                   | Name      | Status | Networks                                              | Image | Flavor         |
+--------------------------------------+-----------+--------+-------------------------------------------------------+-------+----------------+
| 1a0d4624-5294-425a-af37-a83eb0640e1c | net-test1 | ACTIVE | auto_allocated_network=192.168.8.196, 147.251.124.248 |       | standard.small |
+--------------------------------------+-----------+--------+-------------------------------------------------------+-------+----------------+
```

2. Remove and release the floating IP:

```
$ openstack server remove floating ip net-test1 147.251.124.248
$ openstack floating ip delete 147.251.124.248
```


## Cloud tools
You can inspect cloud tools [here](/tools).

## Full Reference
See the [OpenStack CLI Documentation](https://docs.openstack.org/python-openstackclient/train/).
\ No newline at end of file
diff --git a/content/cloud/contribute/index.md b/content/cloud/contribute/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..769cba6505560c78c1f32b80b1562382487704c4
--- /dev/null
+++ b/content/cloud/contribute/index.md
@@ -0,0 +1,80 @@
---
title: "Contribute"
date: 2021-05-18T11:22:35+02:00
draft: false
---

{{< hint danger >}}
**WARNING:**

This page requires an update.
{{< /hint >}}


## Requirements
Working with our documentation requires the following tools:
* *git* for version control
* *nodejs* and *gitbook* for content management

This documentation is written in the *Markdown* markup language.

```bash
# Debian
apt-get install nodejs git
```
```bash
# CentOS
yum install nodejs git
```
```bash
# Fedora
dnf install nodejs git
```
Or see the [NodeJS Documentation](https://nodejs.org/en/download/package-manager/) for distro-specific instructions.

## Work-flow Overview
1. Fork & clone the repository
2. Create a branch
3. Commit your changes
4. Push to the branch
5. Create a Merge Request with the content of your branch

## Fork Repository
See [GitLab @ ICS MU](https://gitlab.ics.muni.cz/cloud/documentation/forks/new) for details. This will create your own clone of our repository where you will be able to make changes. Once you are happy with your changes, use GitLab to submit them to our original repository.

## Clone Repository
```bash
# after creating your own copy of the repository on GitLab
git clone git@gitlab.ics.muni.cz:${GITLAB_USER}/documentation.git
```

## Create New Branch
```bash
# in `documentation`
git checkout -b my_change
```

## Install GitBook
```bash
npm install gitbook-cli -g

# in `documentation`
gitbook install
```
This step MAY require `sudo` depending on your system and NodeJS installation method.

## Edit GitBook
```bash
# in `documentation`
gitbook serve
```
> Edits will be shown live in your browser window, no need to refresh.

## Commit and Push Changes
```bash
git commit -am "My updates"
git push origin my_change
```

## Submit Changes
Create a *Merge Request* via [GitLab @ ICS MU](https://gitlab.ics.muni.cz/cloud/documentation/merge_requests/new).
\ No newline at end of file
diff --git a/content/cloud/news/index.md b/content/cloud/news/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..1f16cfb3791a83caf6c113f9739bf451d8255474
--- /dev/null
+++ b/content/cloud/news/index.md
@@ -0,0 +1,41 @@
---
title: "News"
date: 2021-05-18T11:22:35+02:00
draft: false
---


**2021-05-21** A flavor list was created and published. Also, the parameters of the following flavors were changed:

* hpc.8core-64ram
* hpc.8core-16ram
* hpc.16core-32ram
* hpc.18core-48ram
* hpc.small
* hpc.medium
* hpc.large
* hpc.xlarge
* hpc.xlarge-memory
* hpc.16core-128ram
* hpc.30core-64ram
* hpc.30core-256ram
* hpc.ics-gladosag-full
* csirtmu.tiny1x2

None of the parameters were decreased; they were only increased. The updated parameters were network throughput, IOPS and disk throughput. Existing instances keep the previous parameters, so if you want the new parameters, **make a data backup** and rebuild your instance. You can check the list of flavors [here](/flavors/README.md).

**2021-04-13** OpenStack image `centos-8-1-1911-x86_64_gpu` was deprecated in favor of `centos-8-x86_64_gpu`. The deprecated image will still be available for existing VM instances, but will be moved from public to community images in about 2 months.
**2021-04-05** OpenStack images renamed

**2021-03-31** User documentation updated

**2020-07-24** Octavia service (LBaaS) released

**2020-06-11** [Public repository](https://gitlab.ics.muni.cz/cloud/cloud-tools) where OpenStack users can find useful tools

**2020-05-27** OpenStack was updated from the `stein` to the `train` version

**2020-05-13** Ubuntu 20.04 LTS (Focal Fossa) available in the image catalog

**2020-05-01** Released a [web page](https://projects.cloud.muni.cz/) for requesting OpenStack projects