From 25ebc12cd1007abcb0fa6501ca53f863032d8b6c Mon Sep 17 00:00:00 2001 From: 506487 <506487@mail.muni.cz> Date: Mon, 21 Jun 2021 00:45:00 +0200 Subject: [PATCH] Added cosmetic and grammar fixes --- content/_index.md | 8 +- content/cloud/about/_index.md | 22 ++--- content/cloud/advanced-features/index.md | 114 +++++++++++------------ content/cloud/best-practices/index.md | 69 +++++++------- content/cloud/cli/index.md | 28 +++--- content/cloud/contribute/index.md | 14 +-- content/cloud/faq/index.md | 33 ++++--- content/cloud/network/index.md | 57 ++++++------ content/cloud/news/index.md | 4 +- content/cloud/putty/index.md | 22 ++--- content/cloud/quick-start/index.md | 38 ++++---- content/cloud/register/index.md | 4 +- content/cloud/tools/index.md | 2 +- content/cloud/windows/index.md | 12 +-- 14 files changed, 210 insertions(+), 217 deletions(-) diff --git a/content/_index.md b/content/_index.md index 4e5c6fa..d0bec7b 100644 --- a/content/_index.md +++ b/content/_index.md @@ -10,7 +10,7 @@ disableToc: true **WARNING** [User projects](/cloud/register/#personal-project) (generated for every user) are not meant to contain production machines. -If you use your personal project for long-term services, you have to to ask for a [GROUP](/cloud/register/#group-project) project (even if you do not work in a group or you do not need any extra quotas). +If you use your personal project for long-term services, you have to ask for a [GROUP](/cloud/register/#group-project) project (even if you do not work in a group or you do not need any extra quotas). {{</hint>}} @@ -22,7 +22,7 @@ point for most users. MetaCentrum Cloud is the [IaaS cloud](https://en.wikipedia.org/wiki/Infrastructure_as_a_service) on top of [open-source OpenStack project](https://opendev.org/openstack). Users may configure and use cloud resources for reaching individual (scientific) goals. 
-Most important cloud resources are:
+The most important cloud resources are:
* virtual machines
* virtual networking (VPNs, firewalls, routers)
* private and/or public IP addresses
@@ -38,7 +38,7 @@ section and make sure they have an active user
account and required permissions to access the service.

__Beginners__ should start in the [Quick Start](cloud/quick-start)
-section which provides a step-by-step guide for starting the first
+section, which provides a step-by-step guide for starting the first
virtual machine instance.

__Advanced users__ should continue in the [Advanced Features](cloud/gui)
@@ -60,4 +60,4 @@ use of our infrastructure.
If you need more information, please turn to [the official documentation](https://docs.openstack.org/train/user/) or contact user support and describe your use case.

-Please visit [Network](cloud/network) section in order to see how you should set up the network.
+Please visit the [Network](cloud/network) section to see how you should set up the network.
diff --git a/content/cloud/about/_index.md b/content/cloud/about/_index.md
index ad1cac8..3986a16 100644
--- a/content/cloud/about/_index.md
+++ b/content/cloud/about/_index.md
@@ -6,19 +6,19 @@ weight: -100
---
## Hardware
-MetaCentrum Cloud consist of 13 computational clusters containing 283 hypervisors
-with sum of 9560 cores, 62 GPU cards and 184 TB RAM. For applications with special demands cluster
-with local SSDs and GPU cards is available. OpenStack instances, object store and image store
-can leverage more than 1.5 PTB highly available capacity provided by CEPH storage system.
+MetaCentrum Cloud consists of 13 computational clusters containing 283 hypervisors
+with a total of 9560 cores, 62 GPU cards, and 184 TB RAM. For applications with special demands, a cluster
+with local SSDs and GPU cards is available. OpenStack instances, object store, and image store
+can leverage more than 1.5 PB of highly available capacity provided by the CEPH storage system.
## Software

MetaCentrum Cloud is built on top of OpenStack, which is a free open standard cloud computing platform
-and one of the top 3 most active open source projects in the world. New OpenStack major version is
+and one of the top 3 most active open source projects in the world. A new OpenStack major version is
released twice a year. OpenStack functionality is separated into more than 50 services.

## Number of usage

-More than 400 users are using MetaCentrum Cloud platform and more than 130k VMs were started last year.
+More than 400 users are using the MetaCentrum Cloud platform and more than 130k VMs were started last year.

## MetaCentrum Cloud current release

OpenStack Train

## Deployed services

-Following table contains list of OpenStack services deployed in MetaCentrum Cloud. Services are separated
-into two groups based on their stability and level of support we are able to provide. All services in production
-group are well tested by our team and are covered by support of cloud@metacentrum.cz. To be able to support
-variety of experimental cases we are planning to deploy several services as experimental, which can be useful
-for testing purposes, but it's functionality won't be covered by support of cloud@metacentrum.cz.
+The following table contains a list of OpenStack services deployed in MetaCentrum Cloud. Services are separated
+into two groups based on their stability and the level of support we are able to provide. All services in the production
+group are well tested by our team and are covered by the support of cloud@metacentrum.cz. To be able to support
+a variety of experimental cases, we plan to deploy several services as experimental, which can be useful
+for testing purposes, but their functionality won't be covered by the support of cloud@metacentrum.cz.
| Service | Description | Type |
|-----------|------------------------|--------------|
diff --git a/content/cloud/advanced-features/index.md b/content/cloud/advanced-features/index.md
index 47a392a..336a264 100644
--- a/content/cloud/advanced-features/index.md
+++ b/content/cloud/advanced-features/index.md
@@ -15,16 +15,16 @@ For basic instructions on how to start a virtual machine instance, see [Quick St
The OpenStack orchestration service can be used to deploy and manage complex virtual topologies as single
entities, including basic auto-scaling and self-healing.

-**This feature is provided as is and configuration is entirely the responsibility of the user.**
+**This feature is provided as-is, and configuration is entirely the responsibility of the user.**

For details, refer to [the official documentation](https://docs.openstack.org/heat-dashboard/train/user/index.html).

## Image upload

-We don't support uploading own images by default. MetaCentrum Cloud images are optimized for running in the cloud and we recommend users
-to customize them instead of building own images from scratch. If you need upload custom image, please contact user support for appropriate permissions.
+We don't support uploading your own images by default. MetaCentrum Cloud images are optimized for running in the cloud, and we recommend users
+customize them instead of building their own images from scratch. If you need to upload a custom image, please contact user support for appropriate permissions.

-Instructions for uploading custom image:
+Instructions for uploading a custom image:

1. Upload only images in RAW format (not qcow2, vmdk, etc.).
@@ -38,23 +38,23 @@ hw_rng_model=virtio
hw_qemu_guest_agent=yes
os_require_quiesce=yes
```
-Following needs to be setup correctly (consult official [documentation](https://docs.openstack.org/glance/train/admin/useful-image-properties.html#image-property-keys-and-values))
+The following needs to be set up correctly (consult the official [documentation](https://docs.openstack.org/glance/train/admin/useful-image-properties.html#image-property-keys-and-values))
or instances won't start:
```
os_type=linux # example
os_distro=ubuntu # example
```
-4. Images should contain cloud-init, qemu-guest-agent and grow-part tools
+4. The image should contain cloud-init, qemu-guest-agent, and grow-part tools

-5. OpenStack will resize instance after start. Image shouldn't contain any empty partitions or free space
+5. OpenStack will resize the instance after start. The image shouldn't contain any empty partitions or free space

-For mor detailed explanation about CLI work with images, please refer to [https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/image.html](https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/image.html).
+For a more detailed explanation of CLI work with images, please refer to [https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/image.html](https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/image.html).

## Image visibility

-In OpenStack there are 4 possible visibilities of particular image: **public, private, shared, community**.
+In OpenStack, there are 4 possible visibilities of a particular image: **public, private, shared, community**.

### 1. Public image

### 2. Private image

- **Private image** is an image visible to only to owner of that image. This is default setting for all newly created images.
+ **Private image** is an image visible only to the owner of that image. This is the default setting for all newly created images.

### 3. Shared image

- **Shared image** is an image visible to only to owner and possibly certain groups that owner specified. How to share an image between project, please read following [tutorial](#image-sharing-between-projects) below.
+ **Shared image** is an image visible only to the owner and possibly certain groups that the owner specified. To learn how to share an image between projects, please read the following [tutorial](#image-sharing-between-projects) below.

### 4. Community image

- **Community image** is an image that is accesible to everyone, however it is not visible in dashboard. These images can be listed in CLI via command:
+ **Community image** is an image that is accessible to everyone; however, it is not visible in the dashboard. These images can be listed in the CLI via the command:
 ```openstack image list --community```.

- This is especially beneficial in case of great number of users who should get access to this image or if you own image that is old but some users might still require that image. In that case you can make set old image and **Community image** and set new one as default.
+ This is especially beneficial when a great number of users should get access to this image, or if you own an old image that some users might still require. In that case, you can set the old image as a **Community image** and set the new one as the default.

{{< hint danger >}}
**WARNING**
To create or upload this image you must have an <b>image_uploader</b> right.
```openstack image create --file test-cirros.raw --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi --property hw_rng_model=virtio --property hw_qemu_guest_agent=yes --property os_require_quiesce=yes --property os_type=linux --community test-cirros```

-For more detailed explanation about these properties, go to the following link: [https://wiki.openstack.org/wiki/Glance-v2-community-image-visibility-design](https://wiki.openstack.org/wiki/Glance-v2-community-image-visibility-design).
+For a more detailed explanation of these properties, go to the following link: [https://wiki.openstack.org/wiki/Glance-v2-community-image-visibility-design](https://wiki.openstack.org/wiki/Glance-v2-community-image-visibility-design).

## Image sharing between projects

-Image sharing allows you to share your image between different projects and then it is possible to launch instances from that image in those projects with other collaborators etc. As mentioned in section about CLI, you will need to use your OpenStack credentials from ```openrc``` or ```cloud.yaml``` file.
+Image sharing allows you to share your image between different projects, making it possible to launch instances from that image in those projects with other collaborators etc. As mentioned in the section about CLI, you will need to use your OpenStack credentials from an ```openrc``` or ```cloud.yaml``` file.

-Then to share an image you need to know it's ID, which you can find with command:
+Then to share an image you need to know its ID, which you can find with the command:
```
openstack image show <name_of_image>
```
-where ```name_of_image``` is name of image you want to share.
+where ```name_of_image``` is the name of the image you want to share.

-After that you will also have to know ID of project you want to share your image with.
If you do not know ID of that project you can use following command, which can help you find it:
+After that, you will also have to know the ID of the project you want to share your image with. If you do not know the ID of that project, you can use the following command, which can help you find it:
```
openstack project list | grep <name_of_other_project>
```
-where ```<name_of_project>``` is name of other project. It's ID will show up in first column.
+where ```<name_of_other_project>``` is the name of the other project. Its ID will show up in the first column.

-Now all with necessary IDs you can now share your image. First you need to set an attribute of image to `shared` by following command:
+Now, with all the necessary IDs, you can share your image. First, you need to set the image's visibility attribute to `shared` with the following command:
```
openstack image set --shared <image_ID>
```
And now you can share it with your project by typing this command:
```
openstack image add project <image_ID> <ID_of_other_project>
```
-where ```ID_of_other_project``` is ID of project you want to share image with.
+where ```ID_of_other_project``` is the ID of the project you want to share the image with.

-Now you can check if user of other project accepted your image by command:
+Now you can check if the user of the other project accepted your image with the command:
```
openstack image member list <image_ID>
```
-If the other user did not accepted your image yet, status column will contain value: ```pending```.
+If the other user did not accept your image yet, the status column will contain the value: ```pending```.

**Accepting shared image**

-To accept shared image you need to know ```<image_ID>``` of image that other person wants to share with you. To accept shared image to your project
-you need to use following command:
+To accept a shared image, you need to know the ```<image_ID>``` of the image that the other person wants to share with you.
To accept the shared image into your project,
+you need to use the following command:
```
openstack image set --accept <image_ID>
```
@@ -133,65 +133,65 @@
openstack image list | grep <image_ID>
```
**Unshare shared image**

-As owner of the shared image, you can check all projects that have access to the shared image by following command:
+As the owner of the shared image, you can check all projects that have access to the shared image with the following command:
```
openstack image member list <image_ID>
```
-When you find ```<ID_project_to_unshare>``` of project, you can cancel access of that project to shared image by command:
+When you find the ```<ID_project_to_unshare>``` of the project, you can cancel that project's access to the shared image with the command:
```
openstack image remove project <image ID> <ID_project_to_unshare>
```

## Add SWAP file to instance

-By default VMs after creation do not have SWAP partition. If you need to add a SWAP file to your system you can download and run [script](https://gitlab.ics.muni.cz/cloud/cloud-tools/-/blob/master/swap/swap.sh) that create SWAP file on your VM.
+By default, newly created VMs do not have a SWAP partition. If you need to add a SWAP file to your system, you can download and run a [script](https://gitlab.ics.muni.cz/cloud/cloud-tools/-/blob/master/swap/swap.sh) that creates a SWAP file on your VM.

## Local SSDs

-Default MetaCentrum Cloud storage is implemented via CEPH storage cluster deployed on top of HDDs. This configuration should be sufficient for most cases.
-For instances, that requires high throughput and IOPS, it is possible to utilize hypervizor local SSDs. Requirements for instances on hypervizor local SSD:
-* instances can be deployed only via API (CLI, Ansible, Terraform ...), instances deployed via web gui (Horizon) will always use CEPH for it's storage
+The default MetaCentrum Cloud storage is implemented via the CEPH storage cluster deployed on top of HDDs. This configuration should be sufficient for most cases.
+For instances that require high throughput and IOPS, it is possible to utilize hypervisor-local SSDs. Requirements for instances on hypervisor-local SSDs:
+* instances can be deployed only via API (CLI, Ansible, Terraform ...); instances deployed via the web GUI (Horizon) will always use CEPH for their storage
* supported only by flavors with ssd-ephem suffix (e.g. hpc.4core-16ram-ssd-ephem)
* instances can be rebooted without prior notice or you can be required to delete them
-* you can request them, when asking for new project, or for existing project on cloud@metacentrum.cz
+* you can request them when asking for a new project, or for an existing project, at cloud@metacentrum.cz

## Affinity policy

-Affinity policy is a tool users can use to deploy nodes of cluster on same physical machine or if they should be spread among other physical machines. This can be beneficial if you need fast communication between nodes or you need them to be spreaded due to load-balancing or high-availability etc. For more info please refer to [https://docs.openstack.org/senlin/train/scenarios/affinity.html](https://docs.openstack.org/senlin/train/scenarios/affinity.html).
+Affinity policy is a tool that lets users decide whether the nodes of a cluster should be deployed on the same physical machine or spread among different physical machines. This can be beneficial if you need fast communication between nodes, or if you need them spread for load balancing, high availability, etc. For more info please refer to [https://docs.openstack.org/senlin/train/scenarios/affinity.html](https://docs.openstack.org/senlin/train/scenarios/affinity.html).

## LBaaS - OpenStack Octavia

-Load Balancer is a tool used for distributing a set of tasks over particular set of resources. Its main goal is to find an optimal use of resources and make processing of particular tasks more efficient.
+Load Balancer is a tool used for distributing a set of tasks over a particular set of resources.
Its main goal is to find the optimal use of resources and make the processing of particular tasks more efficient.

-In following example you can see how basic HTTP server is deployed via CLI.
+In the following example, you can see how a basic HTTP server is deployed via CLI.

**Requirements**:
-- 2 instances connected to same internal subnet and configured with HTTP application on TCP port 80
+- 2 instances connected to the same internal subnet and configured with an HTTP application on TCP port 80

```
openstack loadbalancer create --name my_lb --vip-subnet-id <external_subnet_ID>
```
-where **<external_subnet_ID>** is an ID of external shared subnet created by cloud admins reachable from Internet.
+where **<external_subnet_ID>** is the ID of an external shared subnet created by cloud admins, reachable from the Internet.

-You can check newly created Load Balancer by running following command:
+You can check the newly created Load Balancer by running the following command:
```
openstack loadbalancer show my_lb
```
-Now you must create listener on port 80 to enable incoming traffic by following command:
+Now you must create a listener on port 80 to enable incoming traffic with the following command:
```
openstack loadbalancer listener create --name listener_http --protocol HTTP --protocol-port 80 my_lb
```
-Now you must add a pool on created listener to setup configuration for Load Balancer. You can do it by following command:
+Now you must add a pool on the created listener to set up the configuration for the Load Balancer. You can do it with the following command:
```
openstack loadbalancer pool create --name pool_http --lb-algorithm ROUND_ROBIN --listener listener_http --protocol HTTP
```
-Here you created pool using Round Robin algorithm for load balancing.
+Here you created a pool using the Round Robin algorithm for load balancing.
And now you must configure both nodes to join to Load Balancer:

@@ -199,7 +199,7 @@
openstack loadbalancer member create --subnet-id <internal_subnet_ID> --address 192.168.50.15 --protocol-port 80 pool_http
openstack loadbalancer member create --subnet-id <internal_subnet_ID> --address 192.168.50.16 --protocol-port 80 pool_http
```
-where **<internal_subnet_ID>** is an ID of internal subnet used by your instances and **--address** specifies an adress of concrete instance.
+where **<internal_subnet_ID>** is the ID of the internal subnet used by your instances, and **--address** specifies the address of a particular instance.

For more info, please refer to [https://docs.openstack.org/octavia/train/user/guides/basic-cookbook.html#basic-lb-with-hm-and-fip](https://docs.openstack.org/octavia/train/user/guides/basic-cookbook.html#basic-lb-with-hm-and-fip).

@@ -207,16 +207,16 @@ For more info, please refer to [https://docs.openstack.org/octavia/train/user/gu

{{<hint info>}}
**NOTICE:**
-Sometimes it can happen that Load Balancer is working but connection is not working because it is not added into security groups. So to prevent this don't forget to apply neutron security group to amphorae created on the LB network to allow traffic reaching the configured load balancer. See [the load balancer deployment walkthrough](https://docs.openstack.org/octavia/train/contributor/guides/dev-quick-start.html?highlight=security%20group#production-deployment-walkthrough) for more details.
+Sometimes the Load Balancer is running but connections fail because it has not been added to the security groups. To prevent this, don't forget to apply a neutron security group to the amphorae created on the LB network to allow traffic to reach the configured load balancer.
See [the load balancer deployment walkthrough](https://docs.openstack.org/octavia/train/contributor/guides/dev-quick-start.html?highlight=security%20group#production-deployment-walkthrough) for more details.
{{</hint>}}

-LBaaS (Load Balancer as a service) provides user with load balancing service, that can be fully managed via OpenStack API (some basic tasks are supported by GUI). Core benefits:
-* creation and management of load balancer resources can be easily automatized via API, or existing tools like Ansible or Terraform
+LBaaS (Load Balancer as a Service) provides the user with a load-balancing service that can be fully managed via the OpenStack API (some basic tasks are supported by the GUI). Core benefits:
+* creation and management of load balancer resources can be easily automatized via API or existing tools like Ansible or Terraform
* applications can be easily scaled by starting up more OpenStack instances and registering them into the load balancer
* public IPv4 addresses saving - you can deploy one load balancer with one public IP and serve multiple services on multiple pools of instances by TCP/UDP port or L7 policies

-**This feature is provided as is and configuration is entirely the responsibility of the user.**
+**This feature is provided as-is, and configuration is entirely the responsibility of the user.**

Official documentation for LBaaS (Octavia) service - https://docs.openstack.org/octavia/latest/user/index.html

### Terraform

-Terraform is the best orchestration tool for creating and managing cloud infrastructure. It is capable of greatly simplifying cloud operations. It gives you an option if something goes goes wrong you can easily rebuild your cloud infrastructure.
+Terraform is the best orchestration tool for creating and managing cloud infrastructure. It is capable of greatly simplifying cloud operations.
If something goes wrong, it gives you the option to easily rebuild your cloud infrastructure.

-It manages resources like virtual machines,DNS records etc..
+It manages resources like virtual machines, DNS records, etc.

-It is managed through configuration templates containing info about its tasks and resources. They are saved as *.tf files. If configuration changes, Terraform is to able to detect it and create additional operations in order to apply those changes.
+It is managed through configuration templates containing info about its tasks and resources. They are saved as *.tf files. If the configuration changes, Terraform can detect it and plan additional operations to apply those changes.

Here is an example how this configuration file can look like:

```
@@ -246,7 +246,7 @@
default = "~/.ssh/id_rsa"
}
```

- You can use OpenStack Provider which is tool for managing resources OpenStack supports via Terraform. Terraform has an advantage over Heat because it can be used als in other architectures, not only in OpenStack
+ You can use the OpenStack Provider, which is a tool for managing the resources OpenStack supports via Terraform. Terraform has an advantage over Heat because it can also be used with other platforms, not only OpenStack.

For more detail please refer to [https://registry.terraform.io/providers/terraform-provider-openstack/openstack/latest/docs](https://registry.terraform.io/providers/terraform-provider-openstack/openstack/latest/docs) and [https://www.terraform.io/intro/index.html](https://www.terraform.io/intro/index.html).

@@ -255,9 +255,9 @@
### Heat

Heat is another orchestration tool used for managing cloud resources. This one is OpenStack exclusive so you can't use it anywhere else. Just like Terraform it is capable of simplifying orchestration operations in your cloud infrastructure.
-It also uses configuration templates for specification of information about resources and tasks. You can manage resources like servers, floating ips, volumes, security groups etc. via Heat.
+It also uses configuration templates to specify information about resources and tasks. You can manage resources like servers, floating IPs, volumes, security groups, etc. via Heat.

-Here is an example of Heat configuration template in form of *.yaml file:
+Here is an example of a Heat configuration template in the form of a *.yaml file:


```
@@ -280,13 +280,13 @@ You can find more information here [https://wiki.openstack.org/wiki/Heat](https:
OpenStack supports object storage based on [OpenStack Swift](https://docs.openstack.org/swift/latest/api/object_api_v1_overview.html). Creation of object storage container (database) is done by clicking on `+Container` on [Object storage containers page](https://dashboard.cloud.muni.cz/project/containers).

-Every object typically contains data along with metadata and unique global identifier to access it. OpenStack allows you to upload your files via HTTPs protocol. There are two ways managing created object storage container:
+Every object typically contains data along with metadata and a unique global identifier to access it. OpenStack allows you to upload your files via the HTTPS protocol. There are two ways of managing a created object storage container:

1. Use OpenStack component [Swift](https://docs.openstack.org/swift/train/admin/index.html)

2. Use [S3 API](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html)

-In both cases you will need application credentials to be able to manage your data.
+In both cases, you will need application credentials to be able to manage your data.

### Swift credentials

The easiest way to generate **Swift** storage credentials is through [MetaCentru
@@ -295,7 +295,7 @@

### S3 credentials

-If you want to use **S3 API** you will need to generate ec2 credentials for access.
Note that to generate ec2 credentials you will also need credentials containing role of **heat_stack_owner**. Once you sourced your credentials for CLI you can generate ec2 credentials by following command:
+If you want to use the **S3 API**, you will need to generate ec2 credentials for access. Note that to generate ec2 credentials you will also need credentials containing the role of **heat_stack_owner**. Once you have sourced your credentials for the CLI, you can generate ec2 credentials with the following command:

```
$ openstack ec2 credentials create
@@ -321,8 +321,8 @@ Added `swift-s3` successfully.
$ MC ls swift-s3
[2021-04-19 15:13:45 CEST] 0B freznicek-test/
```
-s3cmd client requires configuration file which looks like:
-In this case please open your file with credentials which will look like this:
+The s3cmd client requires a configuration file that looks like this:
+In this case, please open your file with credentials, which will look like this:
```
[default]
access_key = 896**************************651
diff --git a/content/cloud/best-practices/index.md b/content/cloud/best-practices/index.md
index 8e17cbb..abb503e 100644
--- a/content/cloud/best-practices/index.md
+++ b/content/cloud/best-practices/index.md
@@ -15,9 +15,9 @@ In most cases even when you build huge cloud infrastructure you should be able t



-There are following project VMs in the cloud architecture:
+There are the following project VMs in the cloud architecture:

-| VM name | VM orerating system | VM IP addresses | VM flavor type | VM description |
+| VM name | VM operating system | VM IP addresses | VM flavor type | VM description |
| :--- | :---: | :-----------: | :------: | :------: |
| freznicek-cos8 | centos-8-x86_64 | 172.16.0.54, 147.251.21.72 (public) | standard.medium | jump host |
| freznicek-ubu | ubuntu-focal-x86_64 | 172.16.1.67 | standard.medium | internal VM |
@@ -52,7 +52,7 @@ sshuttle -r centos@147.251.21.72 172.16.0.0/22

```sh
# terminal B
-# Access all VMs allocated in project in
172.16.0.0/22 subnet (a C5 instance shown on picture)
+# Access all VMs allocated in the project in the 172.16.0.0/22 subnet (the C5 instance shown in the picture)

$ ssh debian@172.16.0.158 uname -a
Linux freznicek-deb10 4.19.0-14-cloud-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64 GNU/Linux

@@ -63,11 +63,11 @@ Hello, world, cnt=1, hostname=freznicek-ubu

## How to store project data

-Every project generates an amount of data which needs to be stored. There are options (sorted by preference):
+Every project generates data that needs to be stored. There are several options (sorted by preference):
 * as objects or files in a [S3 compatible storage](https://en.wikipedia.org/wiki/Amazon_S3)
   * S3 compatible storage may be requested as separate cloud storage resource ([OpenStack Swift storage + S3 API](https://docs.openstack.org/swift/latest/s3_compat.html))
   * S3 storage may be also easily launched on one of the project VMs ([minio server](https://github.com/minio/minio))
- * as files on
+ * as files on
   * separate (ceph) volume
   * virtual machine disk volume (i.e. no explicit volume for the project data)
 * as objects or files in the [OpenShift Swift storage](https://docs.openstack.org/swift/train/admin/objectstorage-intro.html)
@@ -76,31 +76,31 @@ MetaCentrum Cloud stores raw data:
 * in ceph cloud storage on rotation disks (SSDs will be available soon)
 * in hypervisor (bare metal) disks (rotational, SSD, SSD NVMe)

-We encourage all users to backup important data themselves while we work on cloud native backup solution.
+We encourage all users to back up important data themselves while we work on a cloud-native backup solution.
## How to compute (scientific) tasks

Your application may be:
 * `A.` single instance application, running on one of cloud computation resources
- * `B.` multi-instance application with messaging support (MPI), where all instances run on same cloud computation resource
- * `C.` true distributed computing, where application runs in jobs scheduled to multiple cloud computation resources
+ * `B.` multi-instance application with messaging support (MPI), where all instances run on the same cloud computation resource
+ * `C.` true distributed computing, where the application runs in jobs scheduled to multiple cloud computation resources

-Applications running in single cloud resource (`A.` and `B.`) are direct match for MetaCentrum Cloud OpenStack. Distributed applications (`C.`) are best handled by [MetaCentrum PBS system](https://metavo.metacentrum.cz/cs/state/personal).
+Applications running in a single cloud resource (`A.` and `B.`) are a direct match for MetaCentrum Cloud OpenStack. Distributed applications (`C.`) are best handled by the [MetaCentrum PBS system](https://metavo.metacentrum.cz/cs/state/personal).

## How to create and maintain cloud resources

-Your project is computed within MetaCentrum Cloud Openstack project where you can claim MetaCentrum Cloud Openstack resources (for example virtual machine, floating IP, ...).
+Your project is computed within the MetaCentrum Cloud Openstack project, where you can claim MetaCentrum Cloud Openstack resources (for example, virtual machines, floating IPs, ...).
There are multiple ways to set up MetaCentrum Cloud OpenStack resources: * manually using [MetaCentrum Cloud Openstack Dashboard UI](https://dashboard.cloud.muni.cz) (Openstack Horizon) * automated approaches * [terraform](https://registry.terraform.io/providers/terraform-provider-openstack/openstack/latest/docs) ([example project](https://github.com/terraform-provider-openstack/terraform-provider-openstack/tree/main/examples/app-with-networking)) * ansible * [openstack heat](https://docs.openstack.org/heat/train/template_guide/hello_world.html) -If your project infrastructure (MetaCentrum Cloud Openstack resources) within cloud is static you may select manual approach with [MetaCentrum Cloud Openstack Dashboard UI](https://dashboard.cloud.muni.cz). There are projects which need to allocate MetaCentrum Cloud Openstack resources dynamically, in such cases we strongly encourage automation even at this stage. +If your project infrastructure (MetaCentrum Cloud OpenStack resources) within the cloud is static, you may select a manual approach with the [MetaCentrum Cloud OpenStack Dashboard UI](https://dashboard.cloud.muni.cz). Some projects need to allocate MetaCentrum Cloud OpenStack resources dynamically; in such cases we strongly encourage automation even at this stage. ## How to transfer your work to cloud resources and make it up-to-date -There are several options how to transfer project to cloud resources: +There are several options for transferring the project to cloud resources: * manually with `scp` * automatically with `ansible` ([example](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/de673766b832c48142c6ad1be73f5bce046b02a2/ansible/roles/cloud-project-native/tasks/init.yml#L29-47)) * automatically with `terraform` @@ -110,22 +110,22 @@ There are several options how to transfer project to cloud resources: ### ssh to cloud VM resources and manual update -In this scenario you log to your cloud VM and perform all needed actions manually.
This approach does not scale well, is not effective enough as different users may configure cloud VM resources different ways resulting sometimes in different resource behavior. +In this scenario, you log into your cloud VM and perform all needed actions manually. This approach does not scale well and is not effective, as different users may configure cloud VM resources in different ways, sometimes resulting in different resource behavior. ### automated work transfer and synchronization with docker (or podman) -There are automation tools which may help you to ease your cloud usage: +There are automation tools that may help ease your cloud usage: * ansible and/or terraform * container runtime engine (docker, podman, ...) -Ansible is cloud automation tool which helps you with: +Ansible is a cloud automation tool that helps you with: * keeping your VM updated * automatically migrating your applications or data to/from cloud VM -Container runtime engine helps you to put your into a container stored in a container registry. +A container runtime engine helps you to put your work into a container stored in a container registry.
+Putting your work into a container has several advantages: + * share the code including binaries in a consistent environment (even across different Operating Systems) * avoids application [re]compilation in the cloud * your application running in the container is isolated from the host's container runtime so * you may run multiple instances easily @@ -136,7 +136,7 @@ As a container registry we suggest either: * public quay.io ([you need to register for free first](https://quay.io/signin/)) * private Masaryk University [registry.gitlab.ics.muni.cz:443](registry.gitlab.ics.muni.cz:443) -Example of such approach is demonstrated in [`cloud-estimate-pi` project](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi). +An example of such an approach is demonstrated in the [`cloud-estimate-pi` project](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi). ## How to receive data from project's experiments to your workstation @@ -151,12 +151,12 @@ It certainly depends on how your data are stored, the options are: ## How to make your application in the cloud highly available -Let's assume your application is running in multiple instances in cloud already. -To make you application highly available (HA) you need to +Let's assume your application is already running in multiple instances in the cloud. +To make your application highly available (HA) you need to: * run the application instances on different cloud resources - * use MetaCentrum Cloud load-balancer component (based on [OpenStack Octavia](https://docs.openstack.org/octavia/train/reference/introduction.html#octavia-terminology)) which is goint to balance traffic to one of the app's instances. + * use the MetaCentrum Cloud load-balancer component (based on [OpenStack Octavia](https://docs.openstack.org/octavia/train/reference/introduction.html#octavia-terminology)), which will balance traffic to one of the app's instances. -Your application surely need Fully Qualified Domain Name (FQDN) address to become popular.
Setting FQDN is done on the public floating IP linked to the load-balancer. +Your application surely needs a Fully Qualified Domain Name (FQDN) address to become popular. Setting the FQDN is done on the public floating IP linked to the load-balancer. ## Cloud project example and workflow recommendations @@ -181,7 +181,7 @@ The project recommendations are: We recommend every project defines cloud usage workflow which may consist of: 1. Cloud resource initialization, performing - * cloud resource update to latest state + * cloud resource update to the latest state * install necessary tools for project compilation and execution * test container infrastructure (if it is used) * transfer project files if need to be compiled @@ -192,13 +192,13 @@ We recommend every project defines cloud usage workflow which may consist of: 1. Download project data from cloud to workstation (for further analysis or troubleshooting) * download of project data from cloud to user's workstation 1. Cloud resource destroy - + ## Road-map to effective cloud usage -A project automation is usually done in CI/CD pipelines. Read [Gitlab CI/CD article](https://docs.gitlab.com/ee/ci/introduction/) for more details. +Project automation is usually done in CI/CD pipelines. Read the [GitLab CI/CD article](https://docs.gitlab.com/ee/ci/introduction/) for more details.  -Following table shows the different cloud usage phases: +The following table shows the different cloud usage phases: | Cloud usage phase | Cloud resource management | Project packaging | Project deployment | Project execution | Project data synchronization | Project troubleshooting | | :--- | :---: | :-----------: | :------: | :------------: | :------------: | :------------: | @@ -209,23 +209,18 @@ -## How to convert legacy application into a container for a cloud? +## How to convert a legacy application into a container for the cloud?
-Containerization of applications is one of the best practices when you want to share your application and execute in a cloud. Read about [the benefits](https://cloud.google.com/containers). +Containerization of applications is one of the best practices when you want to share your application and execute it in the cloud. Read about [the benefits](https://cloud.google.com/containers). -Application containerization process consists of following steps: +The application containerization process consists of the following steps: * Select a container registry (where container images with your applications are stored) * Publicly available registries like [quay.io](https://quay.io) are best as everyone may receive your application even without credentials * Your project applications should be containerized via creating a `Dockerfile` ([example](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/master/Dockerfile)) * Follow [docker guide](https://www.freecodecamp.org/news/a-beginners-guide-to-docker-how-to-create-your-first-docker-application-cc03de9b639f/) if you are not familiar with `Dockerfile` syntax - * If your project is huge and contains multiple applications, then it is recommended to divide them in few parts by topic each part building separate container. - * Project CI/CD jobs should build applications, create container image[s] and finally release (push) container image[s] with applications to container registry + * If your project is huge and contains multiple applications, then it is recommended to divide them into a few parts by topic, each part building a separate container. + * Project CI/CD jobs should build the applications, create container image[s], and finally release (push) the container image[s] with applications to the container registry * Everyone is then able to use your applications (packaged in a container image) regardless of which Operating System (OS) he or she uses. Container engine (docker, podman, ...) is available for all mainstream OSes.
* Cloud resources are then told to pull and run your container image[s]. ([example](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/de673766b832c48142c6ad1be73f5bce046b02a2/ansible/roles/cloud-project-container/tasks/deploy.yml#L11-28)) Learn best-practices on our cloud example [project `cloud-estimate-pi`](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi). - - - - - diff --git a/content/cloud/cli/index.md index 7bdf5cc..fcceda2 100644 --- a/content/cloud/cli/index.md +++ b/content/cloud/cli/index.md @@ -56,12 +56,12 @@ Add the following line to the **clouds.yaml** file: ## Creating a key-pair -You can either get your private key from dashboard or you can use **ssh-keygen** command to create new private key: +You can either get your private key from the dashboard or use the **ssh-keygen** command to create a new private key: ``` ssh-keygen -b 4096 ``` -then you will be asked to specify output file and passphrase for your key. +then you will be asked to specify the output file and a passphrase for your key. 1. Assuming your ssh public key is stored in `~/.ssh/id_rsa.pub` ``` openstack keypair create --public-key ~/.ssh/id_rsa.pub my-key1 ``` -## Create security group +## Create a security group 1. Create: ``` openstack security group create my-security-group ``` @@ -86,9 +86,9 @@ openstack security group rule create --description "Permit ICMP (any)" --remote- openstack security group show my-security-group ``` -## Create network +## Create a network -1. Create network + subnet (from auto-allocated pool) +1.
Create network + subnet (from an auto-allocated pool) ``` openstack network create my-net1 openstack subnet create --network my-net1 --subnet-pool private-192-168 my-sub1 ``` @@ -102,9 +102,9 @@ openstack subnet create --network my-net1 --subnet-pool private-192-168 my-sub1 ``` openstack router create my-router1 ``` -Current router have no ports, which makes it pretty useless, we need to create at least 2 interfaces (external and internal) +The current router has no ports, which makes it pretty useless; we need to create at least two interfaces (external and internal) -3. Set external network for router (let us say public-muni-147-251-124), and the external port will be created automatically: +3. Set the external network for the router (let us say public-muni-147-251-124), and the external port will be created automatically: ``` openstack router set --external-gateway public-muni-147-251-124 my-router1 ``` @@ -114,7 +114,7 @@ openstack router set --external-gateway public-muni-147-251-124 my-router1 GW_IP=$(openstack subnet show my-sub1 -c gateway_ip -f value) ``` -5. Create internal port for router (gateway for the network my-net1): +5. Create an internal port for the router (gateway for the network my-net1): ``` openstack port create --network my-net1 --disable-port-security --fixed-ip ip-address=$GW_IP my-net1-port1-gw ``` @@ -198,7 +198,7 @@ $ openstack router set --external-gateway public-cesnet-78-128-251 0bd0374d-b62 -Skipping this section can lead to unreversible loss of data +Skipping this section can lead to irreversible loss of data {{</hint>}} -Volumes are create automatically when creating an instance in GUI, but we need to create them manually in case of CLI +Volumes are created automatically when creating an instance in the GUI, but we need to create them manually when using the CLI 1. Create bootable volume from image(e.g. centos): ``` openstack volume create --image "centos-7-1809-x86_64" --size 40 my_vol1 ``` ## Create server -1. Create instance: +1.
Create the instance: ``` openstack server create --flavor "standard.small" --volume my_vol1 \ --key-name my-key1 --security-group my-security-group --network my-net1 my-server1 ``` -## Floating ip address management +## Floating IP address management ### Creating and assigning new FIP @@ -253,7 +253,7 @@ $ openstack floating ip create public-cesnet-78-128-251 $ openstack server add floating ip net-test1 78.128.251.27 ``` -### Remove existing floating ip +### Remove existing floating IP 1. List your servers: @@ -266,7 +266,7 @@ $ openstack server list +--------------------------------------+-----------+--------+-------------------------------------------------------+-------+----------------+ ``` -2. remove floating ips: +2. remove floating IPs: ``` $ openstack server remove floating ip net-test 147.251.124.248 @@ -278,4 +278,4 @@ $ openstack floating ip delete 147.251.124.248 You can inspect cloud tools [here](/documentation/cloud/tools) ## Full Reference -See [OpenStack CLI Documentation](https://docs.openstack.org/python-openstackclient/train/). \ No newline at end of file +See [OpenStack CLI Documentation](https://docs.openstack.org/python-openstackclient/train/). diff --git a/content/cloud/contribute/index.md b/content/cloud/contribute/index.md index f926147..1f38033 100644 --- a/content/cloud/contribute/index.md +++ b/content/cloud/contribute/index.md @@ -5,11 +5,11 @@ draft: false weight: 110 --- -We use open-source [Hugo](https://gohugo.io/) project to generate the documentation. +We use the open-source [Hugo](https://gohugo.io/) project to generate the documentation. ## Requirements -[Install](https://gohugo.io/getting-started/installing/) Hugo +[Install](https://gohugo.io/getting-started/installing/) Hugo ## Work-flow Overview @@ -39,7 +39,7 @@ git checkout -b my_change # in `documentation` hugo --config config-dev.toml serve ``` -> Edits will be show live in your browser window, no need to restart the server. 
+> Edits will be shown live in your browser window; no need to restart the server. ### Commit and Push Changes ```bash @@ -53,13 +53,13 @@ Create a *Merge Request* via [GitLab @ ICS MU](https://gitlab.ics.muni.cz/cloud/ ## Tips ### Disable table of content -Table of content is generated automatically for every page. To hide table of contents, put this line to page's header: +The table of contents is generated automatically for every page. To hide the table of contents, put this line in the page's header: ``` disableToc: true ``` -### Hide from menu -To hide page from menu, add this line to page's header: +### Hide from the menu +To hide a page from the menu, add this line in the page's header: ``` GeekdocHidden: true ``` @@ -72,4 +72,4 @@ some text {{</hint>}} you can use *short codes*. -Please see [theme documentation](https://geekdocs.de/shortcodes/hints/). \ No newline at end of file +Please see [theme documentation](https://geekdocs.de/shortcodes/hints/). diff --git a/content/cloud/faq/index.md index df1bb9a..ed55902 100644 --- a/content/cloud/faq/index.md +++ b/content/cloud/faq/index.md @@ -11,7 +11,7 @@ Read our [cloud best-practice tips](/documentation/cloud/register). ## What to expect from the cloud and cloud computing -[Migration of Legacy Systems to Cloud Computing](https://www.researchgate.net/publication/280154501_Migration_of_Legacy_Systems_to_Cloud_Computing) article gives the overwiew what to expect when joining a cloud with personal legacy application. +The [Migration of Legacy Systems to Cloud Computing](https://www.researchgate.net/publication/280154501_Migration_of_Legacy_Systems_to_Cloud_Computing) article gives an overview of what to expect when joining a cloud with a personal legacy application. ### What are the cloud computing benefits?
@@ -44,33 +44,32 @@ using only *public-cesnet-78-128-251* and *private-muni-10-16-116* for group pro Follow instructions at [changing the external network](/documentation/cloud/network) in order to change your public network. ## Issues with network stability in Docker -OpenStack instances use 1442 bytes MTU (maximum transmission unit) instead of standard 1500 bytes MTU. Instance itself is -able to setup correct MTU with its counterpart via Path MTU Discovery. Docker needs MTU setup explicitly. Refer documentation for setting up +OpenStack instances use a 1442-byte MTU (maximum transmission unit) instead of the standard 1500-byte MTU. The instance itself can set up the correct MTU with its counterpart via Path MTU Discovery. Docker needs the MTU set explicitly. Refer to the documentation for setting up 1442 MTU in [Docker](https://docs.docker.com/v17.09/engine/userguide/networking/default_network/custom-docker0/) or [Kubernetes](https://docs.projectcalico.org/v3.5/usage/configuration/mtu). ## Issues with proxy in private networks -OpenStack instances can either use public or private networks. If you are using a private network and you need to access to the internet for updates etc., +OpenStack instances can use either public or private networks. If you are using a private network and need to access the internet for updates etc., you can use muni proxy server *proxy.ics.muni.cz*. This server only supports HTTP protocol, not HTTPS. To configure it you must also consider what applications -will be using it, because they can have their own configuration files, where this information must be set. If so, you must find particular setting and set up there -mentioned proxy server with port 3128. Mostly applications use following setting, which can be done by editing file `/etc/environment` where you need to add a line -`http_proxy="http://proxy.ics.muni.cz:3128/"`. And then you must either restart your machine or use command `source /etc/environment`.
will be using it, because they may have their own configuration files where this information must be set. If so, you must find the particular setting and configure the +mentioned proxy server with port 3128 there. Most applications use the following setting, which can be done by editing the file `/etc/environment`, where you need to add the line +`http_proxy="http://proxy.ics.muni.cz:3128/"`. Then you must either restart your machine or run the command `source /etc/environment`. ## How many floating IPs does my group project need? -One floating IP per project should generally suffice. All OpenStack instances are deployed on top of internal OpenStack networks. These internal networks are not by default accessible from outside of OpenStack, but instances on top of same internal network can communicate with each other. +One floating IP per project should generally suffice. All OpenStack instances are deployed on top of internal OpenStack networks. These internal networks are not accessible from outside of OpenStack by default, but instances on top of the same internal network can communicate with each other. -To access internet from an instance... +To access the internet from an instance, or access an instance from the internet, you could allocate a floating public IP per instance.
Since there are not many public IP addresses available and assigning public IP to every instance is not a security best practice, both in public and private clouds these two concepts are used: +* **internet access is provided by virtual router** - all new OpenStack projects are created with *group-project-network* internal network connected to a virtual router with public IP as a gateway. Every instance created with *group-project-network* can access the internet through NAT provided by its router by default. * **accessing the instances:** * **I need to access instances by myself** - best practice for accessing your instances is creating one server with floating IP called [jump host](https://en.wikipedia.org/wiki/Jump_server) and then access all other instances through this host. Simple setup: - 1. Create instance with any Linux. + 1. Create an instance with any Linux. 2. Associate floating IP with this instance. 3. Install [sshuttle](https://github.com/sshuttle/sshuttle) on your client. - 4. `sshuttle -r root@jump_host_fip 192.168.0.1/24`. All your traffic to internal OpenStack network *192.168.0.1/24* is now tunneled through jump host. - * **I need to serve content (e.g. webservice) to other users** - public and private clouds provide LBaaS (Load-Balancer-as-a-Service) service, which proxies users traffic to instances. MetaCentrum Cloud provides this service in experimental mode - [documentation](/documentation/cloud/gui#lbaas) + 4. `sshuttle -r root@jump_host_fip 192.168.0.1/24`. All your traffic to the internal OpenStack network *192.168.0.1/24* is now tunneled through the jump host. + * **I need to serve content (e.g. web service) to other users** - public and private clouds provide LBaaS (Load-Balancer-as-a-Service) service, which proxies users traffic to instances. 
MetaCentrum Cloud provides this service in experimental mode - [documentation](/documentation/cloud/gui#lbaas) -In case, that these options are not suitable for you usecase, you can still request multiple floating IPs. +In case these options are not suitable for your use case, you can still request multiple floating IPs. -## I can't log into openstack, how is that possible ? -The most common reason why you can't log into your openstack account is because your membership in Metacentrum has expired. To extend your membership in Metacentrum, -you can visit [https://metavo.metacentrum.cz/en/myaccount/prodlouzeni](https://metavo.metacentrum.cz/en/myaccount/prodlouzeni). \ No newline at end of file +## I can't log into OpenStack, how is that possible? +The most common reason why you can't log into your OpenStack account is that your membership in MetaCentrum has expired. To extend your membership in MetaCentrum, you can visit [https://metavo.metacentrum.cz/en/myaccount/prodlouzeni](https://metavo.metacentrum.cz/en/myaccount/prodlouzeni). diff --git a/content/cloud/network/index.md index 761a2b4..2b39cac 100644 --- a/content/cloud/network/index.md +++ b/content/cloud/network/index.md @@ -6,7 +6,7 @@ draft: false -For the networking in Cloud2 metacentrum we need to distinguish following scenarios +For networking in Cloud2 MetaCentrum, we need to distinguish the following scenarios: * personal project * group project. @@ -15,9 +15,9 @@ **WARNING:** Please read the following rules: - 1. If you are using a [PERSONAL](/documentation/cloud/register/#personal-project) project you have to use the `78-128-250-pers-proj-net` network to make your instance accesible from external network (e.g. Internet). Use `public-cesnet-78-128-250-PERSONAL` for FIP allocation, FIPs from this pool will be periodically released. - 2.
If you are using a [GROUP](/documentation/cloud/register/#group-project) project you may choose from the `public-cesnet-78-128-251-GROUP`, `public-muni-147-251-124-GROUP` or any other [GROUP](/documentation/cloud/register/#group-project) network for FIP allocation to make your instance accesible from external network (e.g. Internet). - 3. Violation of the network usage may lead into resource removal and reducing of the quotas assigned. + 1. If you are using a [PERSONAL](/documentation/cloud/register/#personal-project) project you have to use the `78-128-250-pers-proj-net` network to make your instance accessible from an external network (e.g. Internet). Use `public-cesnet-78-128-250-PERSONAL` for FIP allocation; FIPs from this pool will be periodically released. + 2. If you are using a [GROUP](/documentation/cloud/register/#group-project) project you may choose from the `public-cesnet-78-128-251-GROUP`, `public-muni-147-251-124-GROUP` or any other [GROUP](/documentation/cloud/register/#group-project) network for FIP allocation to make your instance accessible from an external network (e.g. Internet). + 3. Violation of these network usage rules may lead to resource removal and reduction of the assigned quotas. {{< /hint >}} @@ -25,23 +25,23 @@ ### Personal Project networking -Is currently limited to the common internal network. The network in which you should start your machine is called `78-128-250-pers-proj-net` and is selected by default when using dashboard to start a machine (if you do not have another network created). The floating IP adresses you need to access a virtual machine is `public-cesnet-78-128-250-PERSONAL`. Any other allocated floatin IP address and `external gateway` will be deleted. You cannot use router with the personal project and any previously created routers will be deleted. +Is currently limited to the common internal network.
The network in which you should start your machine is called `78-128-250-pers-proj-net` and is selected by default when using the dashboard to start a machine (if you do not have another network created). The floating IP address you need to access a virtual machine is located in the `public-cesnet-78-128-250-PERSONAL` pool. Any other allocated floating IP address and `external gateway` will be deleted. You cannot use a router with the personal project, and any previously created routers will be deleted. ### Group project -In group project situation is rather different. You cannot use the same approach as personal project (resources allocated in previously mentioned networks will be periodically released). For FIP you need to allocate from pools with `-GROUP` suffix (namely `public-cesnet-78-128-251-GROUP`, `public-muni-147-251-21-GROUP` or `public-muni-147-251-124-GROUP`). +In a group project, the situation is rather different. You cannot use the same approach as in a personal project (resources allocated in the previously mentioned networks will be periodically released). FIPs need to be allocated from pools with the `-GROUP` suffix (namely `public-cesnet-78-128-251-GROUP`, `public-muni-147-251-21-GROUP` or `public-muni-147-251-124-GROUP`). {{< hint info >}} **NOTICE** -If you use MUNI account, you can use private-muni-10-16-116 and log into the network via MUNI VPN or you can set up Proxy networking, which is described +If you use a MUNI account, you can use private-muni-10-16-116 and log into the network via MUNI VPN, or you can set up Proxy networking, which is described [here](/documentation/cloud/network/#proxy-networking) {{< /hint >}} #### Virtual Networks -MetaCentrum Cloud offers software-defined networking as one of its services. Users have the ability to create their own -networks and subnets, connect them with routers, and set up tiered network topologies. +MetaCentrum Cloud offers software-defined networking as one of its services.
Users can create their own +networks and subnets, connect them with routers and set up tiered network topologies. Prerequisites: * Basic understanding of routing @@ -52,16 +52,16 @@ For details, refer to [the official documentation](https://docs.openstack.org/ho #### Network creation -For group project you need to create internal network first, you may use autoallocated pool for subnet autocreation. +For a group project, you need to create an internal network first; you may use an auto-allocated pool for subnet auto-creation. Navigate yourself towards **Network > Networks** in the left menu and click on the **Create Network** on the right side of the window. This will start an interactive dialog for network creation.  Inside the interactive dialog: 1. Type in the network name  2. Move to the **Subnet** section either by clicking next or by clicking on the **Subnet** tab. You may choose to enter the network range manually (recommended for advanced users to not interfere with the public IP address ranges), or select **Allocate Network Address from a pool**. In the **Address pool** section select `private-192-168`. Select a network mask that suits your needs (`27` as default can hold up to 29 machines; use an IP calculator if you are not sure).  3.
For the last tab **Subnet Details** just check that a DNS is present and the DHCP box is checked; alternatively, you can create the allocation pool or specify static routes here (for advanced users).  @@ -73,28 +73,28 @@ If you want to use CLI to create network, please go [here](/documentation/cloud/ #### Proxy networking -In your OpenStack instances you can you private or public networks. If you use private network and you need to access to the internet for updates etc., +In your OpenStack instances, you can use private or public networks. If you use a private network and need to access the internet for updates etc., you can visit following [link](/documentation/cloud/faq/#issues-with-proxy-in-private-networks), where it is explained, how to set up Proxy connection. #### Setup Router gateway (Required for Group projects) Completing [Create Virtual Machine Instance](/documentation/cloud/quick-start/#create-virtual-machine-instance) created instance connected -to software defined network represented by internal network, subnet and router. Router has by default gateway address +to a software-defined network represented by the internal network, subnet, and router. The router has a default gateway address from External Network chosen by cloud administrators. You can change it to any External Network with [GROUP](/documentation/cloud/register/#group-project) suffix, that is visible to you (e.g. **public-muni-147-251-124-GROUP** or **public-cesnet-78-128-251-GROUP**). Usage of External Networks with suffix PERSONAL (e.g. **public-cesnet-78-128-250-PERSONAL**) is discouraged. IP addresses from PERSONAL segments will be automatically released from Group projects. For changing gateway IP address follow these steps: -1.
In **Network > Routers**, click the **Set Gateway** button next to the router. If the router already has a gateway set, use the **Clear Gateway** button and confirm. If no router exists yet, use the **Create Router** button and choose the network. 2. From list of External Network choose **public-cesnet-78-128-251-GROUP**, **public-muni-147-251-124-GROUP** or any other [GROUP](/documentation/cloud/register/#group-project) network you see.  -Router is setup with persistent gateway. +The router is now set up with a persistent gateway. #### Router creation @@ -131,7 +131,7 @@ Routers can also be used to route traffic between internal networks. This is an {{< hint danger >}} **WARNING** -There is a limited number of Floating IP adresses. So please before you ask for more Floating IP address, visit and read [FAQ](/documentation/cloud/faq/#how-many-floating-ips-does-my-group-project-need) +The number of Floating IP addresses is limited, so before you ask for more, please read the [FAQ](/documentation/cloud/faq/#how-many-floating-ips-does-my-group-project-need) {{< /hint >}} @@ -139,13 +139,12 @@ There is a limited number of Floating IP adresses. So please before you ask for To make an instance accessible from external networks (e.g., The Internet), a so-called Floating IP Address has to be associated with it. -1. In **Project > Network > Floating IPs**, select **Allocate IP to Project**. Pick an IP pool from which to allocate - the address. Click on **Allocate IP**. +1. In **Project > Network > Floating IPs**, select **Allocate IP to Project**. Pick an IP pool from which to allocate the address. Click on **Allocate IP**. 
{{< hint info >}} **NOTICE** -In case of group projects when picking an IP pool from which to allocate a floating IP address, please, keep in mind that you have to allocate +In the case of group projects, when picking an IP pool from which to allocate a floating IP address, please keep in mind that you have to allocate an address in the pool connected to your virtual router. {{< /hint >}} @@ -165,7 +164,7 @@ and the IP in question will remain allocated to you and consume your Floating IP 1. In **Project > Compute > Instances**, select **Associate Floating IP** from the **Actions** drop-down menu for the given instance. -2. Select IP address and click on **Associate**. +2. Select an IP address and click on **Associate**.  @@ -178,16 +177,16 @@ If you want to use CLI to manage FIP, please go [here](/documentation/cloud/cli/ ## Change external network in GUI -Following chapter covers the problem of changing the external network via GUI or CLI. +The following chapter covers changing the external network via the GUI or CLI. ### Existing Floating IP release -First you need to release existing Floating IPs from your instances - go to **Project > Compute > Instances**. Click on the menu **Actions** on the instance you whish to change and **Disassociate Floating IP** and specify that you wish to **Release Floating IP** WARN: After this action your project will no longer be able to use the floating IP address you released. Confirm that you wish to disassociate the floating IP by clicking on the **Disassociate** button. When you are done with all instances connected to your router you may continue with the next step. +First, you need to release existing Floating IPs from your instances - go to **Project > Compute > Instances**. 
Click the **Actions** menu on the instance you wish to change, select **Disassociate Floating IP**, and specify that you wish to **Release Floating IP**. WARN: After this action, your project will no longer be able to use the floating IP address you released. Confirm that you wish to disassociate the floating IP by clicking on the **Disassociate** button. When you are done with all instances connected to your router, you may continue with the next step.  ### Clear Gateway -Now, you should navigate yourself to the **Project > Network > Routers**. Click on the action **Clear Gateway** of your router. This action will disassociate the external network from your router, so your machines will not longer be able to access Internet. If you get an error go back to step 1 and **Disassociate your Floating IPs**. +Now, navigate to **Project > Network > Routers**. Click the **Clear Gateway** action of your router. This action will disassociate the external network from your router, so your machines will no longer be able to access the Internet. If you get an error, go back to step 1 and **Disassociate your Floating IPs**.  ### Set Gateway @@ -195,19 +194,19 @@ Now, you should navigate yourself to the **Project > Network > Routers**. 1. Now, you can set your gateway by clicking **Set Gateway**.  -2. Choose network you desire to use (e.g. **public-cesnet-78-128-251**) and confirm. +2. Choose the network you desire to use (e.g. **public-cesnet-78-128-251**) and confirm.  ### Allocate new Floating IP(s) {{< hint danger >}} **WARNING** -New floating IP address for router must be from same network pool which was selected as new gateway. +The new floating IP address for the router must be from the same network pool that was selected as the new gateway. {{< /hint >}} 1. Go to **Project > Network > Floating IPs** and click on the **Allocate IP to Project** button. 
Select **Pool** with the same value as the network you chose in the previous step and confirm by clicking **Allocate IP**.  -2. Now click on the **Associate** button next to the Floating IP you just created. Select **Port to be associated** with desired instance. Confirm with the **Associate** button. Repeat this section for all your machines requiring a Floating IP. +2. Now click on the **Associate** button next to the Floating IP you just created. Select the **Port to be associated** with the desired instance. Confirm with the **Associate** button. Repeat this section for all your machines requiring a Floating IP.  diff --git a/content/cloud/news/index.md b/content/cloud/news/index.md index 3bded7a..0c7863d 100644 --- a/content/cloud/news/index.md +++ b/content/cloud/news/index.md @@ -24,9 +24,9 @@ GeekdocHidden: true * hpc.ics-gladosag-full * csirtmu.tiny1x2 -None of the parameters were decreased but increased. Updated parameters were Net througput, IOPS and Disk througput. Existing instances will have the previous parameters so if you want to get new parameters, **make a data backup** and rebuild your instance You can check list of flavors [here](/documentation/cloud/flavors). +No parameters were decreased; some were increased. The updated parameters were net throughput, IOPS, and disk throughput. Existing instances will keep the previous parameters, so if you want the new parameters, **make a data backup** and rebuild your instance. You can check the list of flavors [here](/documentation/cloud/flavors). -**2021-04-13** OpenStack image `centos-8-1-1911-x86_64_gpu` deprecation in favor of `centos-8-x86_64_gpu`. Deprecated image will be still available for existing VM instances, but will be moved from public to community images in about 2 months. +**2021-04-13** OpenStack image `centos-8-1-1911-x86_64_gpu` deprecation in favor of `centos-8-x86_64_gpu`. 
The deprecated image will still be available for existing VM instances but will be moved from public to community images in about 2 months. **2021-04-05** OpenStack images renamed diff --git a/content/cloud/putty/index.md b/content/cloud/putty/index.md index ebfe342..cca5ce7 100644 --- a/content/cloud/putty/index.md +++ b/content/cloud/putty/index.md @@ -9,9 +9,9 @@ draft: false [PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/faq.html#faq-what) is a client program for the SSH on Windows OS. ## Windows PuTTY Installer -We recommend to download [Windows Installer](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) with PuTTY utilities as: -* Pageant (SSH authentication agent) - store private key in memory without need to retype a passphrase on every login -* PuTTYgen (PuTTY key generator) - convert OpenSSH format of id_rsa to PuTTY ppk private key and so on and so forth +We recommend downloading the [Windows Installer](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html) with PuTTY utilities such as: +* Pageant (SSH authentication agent) - store the private key in memory without the need to retype a passphrase on every login +* PuTTYgen (PuTTY key generator) - convert an OpenSSH-format id_rsa key to a PuTTY ppk private key, and so on ## PuTTY - Connect to the Instance @@ -23,7 +23,7 @@ We recommend to download [Windows Installer](https://www.chiark.greenend.org.uk/ * Return to *Session page* and Save selected configuration with **Save** button * Now you can log in using **Open** button * Enter passphrase for selected private key file if [Pageant SSH authentication agent](#pageant-ssh-agent) is not used - * We recomend using Pageant SSH Agent to store private key in memory without need to retype a passphrase on every login + * We recommend using Pageant SSH Agent to store the private key in memory without the need to retype a passphrase on every login  @@ -34,9 +34,9 @@ We recommend to download [Windows Installer](https://www.chiark.greenend.org.uk/ 
* Locate Pageant icon in the Notification Area and double click on it * Use **Add Key** button * Browse files and select your PuTTY Private Key File in format ppk -* Use **Open** buton -* Eter passphrase and confirm **OK** button -* Your private key is now located in the memory without need to retype a passphrase on every login +* Use the **Open** button +* Enter the passphrase and confirm with the **OK** button +* Your private key is now stored in memory without the need to retype a passphrase on every login  @@ -48,7 +48,7 @@ PuTTYgen is the PuTTY key generator. You can load in an existing private key and ## Convert OpenSSH format to PuTTY ppk format * Run PuTTYgen, in the menu Conversion -> Import key browse and load your OpenSSH format id_rsa private key using your passphrase -* Save PuTTY ppk private key using button **Save private key**, browse destination for PuTTY format id_rsa.ppk and save file +* Save the PuTTY ppk private key using the **Save private key** button, browse to a destination for the PuTTY-format id_rsa.ppk, and save the file  @@ -64,7 +64,7 @@ PuTTYgen is the PuTTY key generator. You can load in an existing private key and ## Change Password for Existing Private Key Pair * Load your existing private key using button **Load**, confirm opening using your passphrase -* Enter new passphrase in field *Key passphrase* and confirm again in field *Confirm passphrase* +* Enter a new passphrase in the field *Key passphrase* and confirm again in the field *Confirm passphrase* * Save changes using button **Save private key**  @@ -73,10 +73,10 @@ PuTTYgen is the PuTTY key generator. 
You can load in an existing private key and ## Generate a New Key Pair * Start with **Generate button** -* Generate some randomness by moving your mouse over dialog +* Generate some randomness by moving your mouse over the dialog * Wait while the key is generated * Enter a comment for your key using "your-email@address" -* Enter key passphrase, confirm key passphrase +* Enter a key passphrase and confirm it * Save your new private key in the "id_rsa.ppk" format using the **Save private key** button * Save the public key with the **Save public key** button diff --git a/content/cloud/quick-start/index.md b/content/cloud/quick-start/index.md index fda1ed5..eb0036e 100644 --- a/content/cloud/quick-start/index.md +++ b/content/cloud/quick-start/index.md @@ -36,7 +36,7 @@ International users may choose <strong>EGI Check-in</strong>, <strong>DEEP AAI</ {{< hint danger >}} **WARNING** -If you use multiple accounts, you should use the one that you need for your corresponing work. +If you use multiple accounts, you should use the one that corresponds to your work. {{< /hint >}} 2. Click on **Sign In**. @@ -78,7 +78,7 @@ an instance remotely is SSH. Using SSH requires a pair of keys - a public key an * Use button **Create KeyPair** * Copy Private Key to Clipboard and save it to the ~/.ssh/id_rsa on your local computer * Confirm using button **Done** - * Now the public key is available down on the page. Use arrow before key name to show public part. Copy this public key to the file ~/.ssh/id_rsa.pub on your local computer + * Now the public key is available lower on the page. Use the arrow before the key name to show the public part. Copy this public key to the file ~/.ssh/id_rsa.pub on your local computer  @@ -102,10 +102,10 @@ your virtual machine via SSH from your local terminal. 1. Go to **Project > Network > Security Groups**. Click on **Manage Rules** for the **default** security group. -2. 
Click on **Add rule**, choose **SSH** and leave the remaining fields unchanged. +2. Click on **Add rule**, choose **SSH**, and leave the remaining fields unchanged. This will allow you to access your instance. -3. Click on **Add rule**, choose **ALL ICMP** and leave the remaining fields unchanged. +3. Click on **Add rule**, choose **ALL ICMP**, and leave the remaining fields unchanged. This will allow you to `ping` your instance.  @@ -113,8 +113,8 @@ your virtual machine via SSH from your local terminal. {{< hint danger >}} **WARNING** -You have 2 possibilities how to configure security groups policy. One is through CIDR which specifies rules for concrete network range. The second one specifies -rules for members of specified security group, i.e. policy will be applied on instances who belong to selected security group. +You have two ways to configure a security group policy. One is through CIDR, which specifies rules for a concrete network range. The second specifies rules for members of a specified security group, i.e. the policy will be applied to instances that belong to the selected security group. {{</hint>}} @@ -126,12 +126,12 @@ For details, refer to [the official documentation](https://docs.openstack.org/ho  -2. Choose name, description, and the number of instances. +2. Choose a name, description, and number of instances. If you are creating more instances, `-%i` will be automatically appended to the name of each instance.  -3. Choose an image from which to boot the instance. Image will be automatically copied to a persistent volume +3. Choose an image from which to boot the instance. The image will be automatically copied to a persistent volume that will remain available even after the instance has been deleted.  
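As a side note for CLI-oriented users, the two security-group scoping styles described in the warning above (CIDR-based vs. group-membership-based rules) could be expressed with the OpenStack client roughly as follows. This is an illustrative sketch only, not part of the original guide; the CIDR below is a placeholder, and it assumes the OpenStack CLI is installed with credentials sourced:

```shell
# Illustrative sketch; the remote CIDR is a placeholder value.

# Style 1 - scope the rule by CIDR: allow SSH only from one network range.
openstack security group rule create default \
  --protocol tcp --dst-port 22 --remote-ip 192.168.0.0/24

# Style 2 - scope the rule by group membership: allow all ICMP between
# instances that belong to the "default" security group.
openstack security group rule create default \
  --protocol icmp --remote-group default
```

The `--remote-ip` and `--remote-group` options are mutually exclusive ways of saying who the rule applies to, which is exactly the distinction the warning draws.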
@@ -140,7 +140,7 @@ For details, refer to [the official documentation](https://docs.openstack.org/ho  - On following image you can also find details about concrete flavor in highlighted sections + In the following image, you can also find details about a concrete flavor in the highlighted sections  @@ -165,7 +165,7 @@ For details, refer to [the official documentation](https://docs.openstack.org/ho  See [cloud-init](https://cloud-init.io/) for details. -8. Use button Launch Instance to initialize new instance +8. Use the Launch Instance button to initialize a new instance 9. Wait until instance initialization finishes and [Associate Floating IP](/documentation/cloud/network/#associate-floating-ip). For a group project, always select the same network as used in [Router gateway](cloud/network/#setup-router-gateway-required-for-group-projects)  @@ -176,24 +176,24 @@ Connect to the instance using **ssh image login**, [id_rsa key registered in Ope {{< hint info >}} **NOTICE** -On Linux and Mac you can use already present SSH client. On Windows there are other possibilities how to connect via SSH. One of the most common is [PuTTy](https://en.wikipedia.org/wiki/PuTTY) SSH client. How to configure and use PuTTy you can visit our tutorial [here](/documentation/cloud/putty/#putty). +On Linux and Mac, you can use the already present SSH client. On Windows, there are other ways to connect via SSH. One of the most common is the [PuTTy](https://en.wikipedia.org/wiki/PuTTY) SSH client. To learn how to configure and use PuTTY, see our tutorial [here](/documentation/cloud/putty/#putty). {{</hint>}} -To get **ssh image login** of corresponding image go to **Project > Compute > Images**. There you will have list of all available images and then you have to click on name of one you want. +To get the **ssh image login** of the corresponding image, go to **Project > Compute > Images**. 
There you will have a list of all available images; click on the name of the one you want.  Then check the **Custom Properties** section, where you will see the property **default_user**, which is the **ssh image login**. In the picture, the **ssh image login** for that image is "debian".  -Then you can connect to your instance using following command: +Then you can connect to your instance using the following command: ``` ssh -A -X -i ~/.ssh/id_rsa <ssh_image_login>@<Floating IP> ``` -So it can look like: +So it can look like this: ``` ssh -A -X -i ~/.ssh/id_rsa debian@192.168.18.15 ``` @@ -238,7 +238,7 @@ root file system containing the operating system. It adds flexibility and often attached and detached from instances at any time, their creation and deletion are managed separately from instances. 1. In **Project > Volumes > Volumes**, select **Create Volume**. -2. Provide name, description and size in GBs. If not instructed otherwise, leave all other fields unchanged. +2. Provide a name, description, and size in GB. If not instructed otherwise, leave all other fields unchanged. 3. Click on **Create Volume**. 4. __(optional)__ In **Project > Compute > Instances**, select **Attach Volume** from the **Actions** drop-down menu for the given instance. @@ -247,18 +247,18 @@ attached and detached from instances at any time, their creation and deletion ar For details, refer to [the official documentation](https://docs.openstack.org/horizon/train/user/manage-volumes.html). ## Instance resize -If you find out that you need to select another flavor for your instance, here is a tutorial how to do it: +If you find out that you need to select another flavor for your instance, here is a tutorial on how to do it: Before resizing, you should save all your unsaved work and also consider making a data backup in case something goes wrong. 
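For completeness, the GUI resize flow in this section also has a CLI counterpart. This is a hedged sketch, not part of the original guide: "my-instance" is a placeholder server name, the flavor is one from the list earlier in this patch, and it assumes a configured OpenStack client (on Train-era clients the confirm/revert step uses the flags shown; newer clients also accept `openstack server resize confirm`):

```shell
# Illustrative sketch; "my-instance" is a placeholder name.
openstack server resize --flavor csirtmu.tiny1x2 my-instance

# Once the server reaches the VERIFY_RESIZE state, confirm the change...
openstack server resize --confirm my-instance
# ...or roll back to the original flavor instead:
# openstack server resize --revert my-instance
```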
-First, go to your instance and select **Resize Instance**  -After that, select the flavor you want instead of your current one and confirm your choice  -Then either confirm or revert the selected changes in the instance menu  diff --git a/content/cloud/register/index.md b/content/cloud/register/index.md index df6125a..ba60375 100644 --- a/content/cloud/register/index.md +++ b/content/cloud/register/index.md @@ -87,8 +87,8 @@ please visit OpenID Connect User Profile according to your federation: and provide us with information that you see on the page. That is going to be __access control information__. -If you don't have VO/group or you know nothing about it, please contact MUNI Identity Management team -in order to create a new group within the Unified Login service. +If you don't have a VO/group or you know nothing about it, please contact the MUNI Identity Management team +to create a new group within the Unified Login service. In the request, describe that you need a group for accessing MetaCentrum Cloud and provide the following information: * Project/group name diff --git a/content/cloud/tools/index.md b/content/cloud/tools/index.md index 276e7c3..14cce48 100644 --- a/content/cloud/tools/index.md +++ b/content/cloud/tools/index.md @@ -6,4 +6,4 @@ weight: 100 disableToc: true --- -On this address [https://gitlab.ics.muni.cz/cloud/cloud-tools](https://gitlab.ics.muni.cz/cloud/cloud-tools) you can find a docker container with all modules required for cloud management, if you are interested in managing your cloud platform via CLI. If so, you can check our guide how to use CLI cloud interface [here](/documentation/cloud/cli/). 
\ No newline at end of file +At this address [https://gitlab.ics.muni.cz/cloud/cloud-tools](https://gitlab.ics.muni.cz/cloud/cloud-tools) you can find a Docker container with all modules required for cloud management, in case you are interested in managing your cloud platform via the CLI. If so, you can check our guide on how to use the CLI cloud interface [here](/documentation/cloud/cli/). diff --git a/content/cloud/windows/index.md b/content/cloud/windows/index.md index 4ba9d79..d788025 100644 --- a/content/cloud/windows/index.md +++ b/content/cloud/windows/index.md @@ -6,18 +6,18 @@ disableToc: true --- -Windows host system allows RDP access allowed for `Administrators` group. By default there are two users in this group: -- Admin - the password for this account is defined by `admin_pass` OpenStack instance metadata, if no value is entered for this key, random password is generated. (could be used for orchestartion). +The Windows host system allows RDP access for the `Administrators` group. By default, there are two users in this group: +- Admin - the password for this account is defined by the `admin_pass` OpenStack instance metadata; if no value is entered for this key, a random password is generated (this could be used for orchestration). - Administrator - the password must be filled after instantiation of the system. -The next step is to create a security group, that will allow access to a port `3389` ([RDP protocol](https://en.wikipedia.org/wiki/Remote_Desktop_Protocol)) for the instance. +The next step is to create a security group that will allow access to port `3389` ([RDP protocol](https://en.wikipedia.org/wiki/Remote_Desktop_Protocol)) for the instance. -We recommend disabling those accounts, creating new ones in order to administer Windows instance in any production environment. +We recommend disabling those accounts and creating new ones to administer Windows instances in any production environment. 
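The RDP security group described above could, for instance, be created with the OpenStack CLI along these lines. This is an illustrative sketch, not a required procedure from the original docs; "rdp" and "my-windows-instance" are placeholder names, and opening port 3389 to 0.0.0.0/0 is shown only for brevity (a narrower CIDR is safer):

```shell
# Illustrative sketch; group and server names are placeholders.
openstack security group create rdp --description "Allow RDP access"
openstack security group rule create rdp \
  --protocol tcp --dst-port 3389 --remote-ip 0.0.0.0/0
openstack server add security group my-windows-instance rdp
```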
-# Licensing +# Licensing - We do not currently support Windows licensing; license responsibility for Windows is entirely up to the user. # Advanced users - You may use all features of [cloudbase-init](https://cloudbase.it/cloudbase-init/) for Windows. -- Windows Server [hardening guidelines](https://security.uconn.edu/server-hardening-standard-windows/). \ No newline at end of file +- Windows Server [hardening guidelines](https://security.uconn.edu/server-hardening-standard-windows/). -- GitLab