---
title: "Advanced Features"
date: 2021-05-18T11:22:35+02:00
draft: false
toc: true
---
The following guide will introduce you to advanced features available in MetaCentrum Cloud.
For basic instructions on how to start a virtual machine instance, see [Quick Start](/cloud/quick-start).
## Orchestration
The OpenStack orchestration service can be used to deploy and manage complex virtual topologies as single entities,
including basic auto-scaling and self-healing.
**This feature is provided as is, and its configuration is entirely the responsibility of the user.**
For details, refer to [the official documentation](https://docs.openstack.org/heat-dashboard/train/user/index.html).
## Image upload
We don't support uploading personal images by default. MetaCentrum Cloud images are optimized for running in the cloud and we recommend users
customize them instead of building their own images from scratch. If you need to upload a custom image, please contact user support for appropriate permissions.
Instructions for uploading a custom image:
1. Upload only images in RAW format (not qcow2, vmdk, etc.).
2. Upload is supported only through OpenStack [CLI](/cloud/cli/) with Application Credentials.
3. Each image needs to contain metadata:
```
hw_scsi_model=virtio-scsi
hw_disk_bus=scsi
hw_rng_model=virtio
hw_qemu_guest_agent=yes
os_require_quiesce=yes
```
The following needs to be set up correctly (consult the official [documentation](https://docs.openstack.org/glance/train/admin/useful-image-properties.html#image-property-keys-and-values)), otherwise instances won't start:
```
os_type=linux # example
os_distro=ubuntu # example
```
4. The image should contain the cloud-init, qemu-guest-agent, and growpart tools.
5. OpenStack will resize the instance disk after start, so the image shouldn't contain any empty partitions or free space.
For a more detailed explanation about CLI work with images, please refer to [https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/image.html](https://docs.openstack.org/python-openstackclient/pike/cli/command-objects/image.html).
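As an illustration only, a custom image upload via the CLI could look like the following sketch (assuming application credentials are sourced, the `image_uploader` permission has been granted, and `my-image.raw` / `my-custom-image` are placeholder names):
```
openstack image create \
  --disk-format raw --container-format bare \
  --file my-image.raw \
  --property hw_scsi_model=virtio-scsi \
  --property hw_disk_bus=scsi \
  --property hw_rng_model=virtio \
  --property hw_qemu_guest_agent=yes \
  --property os_require_quiesce=yes \
  --property os_type=linux \
  --property os_distro=ubuntu \
  my-custom-image
```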
## Image visibility
In OpenStack there are 4 possible visibilities of a particular image: **public, private, shared, community**.
You can view these images via **CLI** or in **dashboard**.
In the **dashboard**, visit the *Images* section, where you can browse the listed images and/or set search criteria in the search bar. The *Visibility* parameter lets you specify the visibility of the image you are searching for. The visibility options are explained below.
![](images/img_vis.png)
### 1. Public images
A **public image** is visible and readable to everyone. Only OpenStack administrators can modify public images.
### 2. Private images
A **private image** is visible only to its owner. This is the default setting for all newly created images.
### 3. Shared images
A **shared image** is visible only to its owner and to the projects the owner has shared it with. To learn how to share an image between projects, read the [tutorial](#image-sharing-between-projects) below. Image owners are responsible for managing shared images.
### 4. Community images
A **community image** is accessible to everyone. Image owners are responsible for managing community images.
Community images are visible in the dashboard using the `Visibility: Community` query. They can be listed via the CLI command:
```openstack image list --community```.
This is especially useful when a large number of users should have access to an image, or when you own an old image that some users may still require. In that case, you can set the old image as a **community image** and set the new one as the default.
{{< hint danger >}}
**WARNING**
To create or upload a community image, you must have the <b>image_uploader</b> right.
{{</hint>}}
Creating a new **Community image** can look like this:
```openstack image create --file test-cirros.raw --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi --property hw_rng_model=virtio --property hw_qemu_guest_agent=yes --property os_require_quiesce=yes --property os_type=linux --community test-cirros```
Note that references to existing community images should use `<image-id>` instead of `<image-name>`.
See [image visibility design upstream document](https://wiki.openstack.org/wiki/Glance-v2-community-image-visibility-design) for more details.
## Image sharing between projects
There are two ways of sharing an OpenStack Glance image among projects: using the `shared` or the `community` image visibility.
### Shared image approach
Image sharing allows you to make your image available to other projects, so that instances can be launched from it in those projects by your collaborators. As mentioned in the section about the CLI, you will need to use your OpenStack credentials from the ```openrc``` or ```clouds.yaml``` file.
Then to share an image you need to know its ID, which you can find with the command:
```
openstack image show <name_of_image>
```
where ```<name_of_image>``` is the name of the image you want to share.
After that, you will also have to know the ID of the project you want to share your image with. If you do not know the ID of that project you can use the following command, which can help you find it:
```
openstack project list | grep <name_of_other_project>
```
where ```<name_of_other_project>``` is the name of the other project. Its ID will show up in the first column.
Now, with all the necessary IDs, you can share your image. First, set the image's visibility attribute to `shared` with the following command:
```
openstack image set --shared <image_ID>
```
Then share it with the other project by typing this command:
```
openstack image add project <image_ID> <ID_of_other_project>
```
where ```ID_of_other_project``` is the ID of the project you want to share the image with.
Now you can check whether the other project has accepted your image with the command:
```
openstack image member list <image_ID>
```
If the other project has not accepted your image yet, the status column will contain the value ```pending```.
**Accepting shared image**
To accept a shared image, you need to know the ```<image_ID>``` of the image that the other person wants to share with you. Then accept the shared image into your project
with the following command:
```
openstack image set --accept <image_ID>
```
You can then verify that by listing your images:
```
openstack image list | grep <image_ID>
```
**Unshare shared image**
As the owner of a shared image, you can list all projects that have access to it with the following command:
```
openstack image member list <image_ID>
```
When you find the ```<ID_project_to_unshare>``` of the project, you can revoke that project's access to the shared image with the command:
```
openstack image remove project <image_ID> <ID_project_to_unshare>
```
### Community image approach
This approach is very simple:
1. Mark the image as `community` (`openstack image set --community <image_ID>`)
1. Now everyone can use the community image, but there are two limitations:
   * to list community images you **have to** specify visibility (in the UI: `Visibility: Community`, CLI: `openstack image list --community`)
   * to use any community image you **have to** use its `<image_ID>` (references via `<image_name>` result in NOT FOUND)
## Add SWAP file to instance
By default, newly created VMs do not have a swap partition. If you need to add a swap file to your system, you can download and run a [script](https://gitlab.ics.muni.cz/cloud/cloud-tools/-/blob/master/swap/swap.sh) that creates a swap file on your VM.
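If you prefer to add the swap file manually, the following is a minimal sketch of the same idea (the 2 GB size and the `/swapfile` path are only examples):
```
# create and activate a 2 GB swap file
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make the swap file persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```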
## Local SSDs
Default MetaCentrum Cloud storage is implemented via the CEPH storage cluster deployed on top of HDDs. This configuration should be sufficient for most cases.
For instances that require high throughput and IOPS, it is possible to utilize the hypervisors' local SSDs. Requirements for instances on hypervisor-local SSDs:
* instances can be deployed only via the API (CLI, Ansible, Terraform, ...); instances deployed via the web GUI (Horizon) will always use CEPH for their storage (a CLI sketch follows below)
* supported only by flavors with the ssd-ephem suffix (e.g. hpc.4core-16ram-ssd-ephem)
* instances can be rebooted without prior notice, or you may be required to delete them
* you can request them when asking for a new project, or for an existing project, at cloud@metacentrum.cz
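A minimal CLI sketch of such a deployment (the image, key, security group and network names are placeholders; the flavor is the example mentioned above), booting directly from an image so that the instance disk can stay on the hypervisor's local storage instead of a CEPH volume:
```
openstack server create --flavor hpc.4core-16ram-ssd-ephem \
  --image "debian-10-x86_64" --key-name my-key1 \
  --security-group my-security-group --network my-net1 my-ssd-server1
```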
## Affinity policy
The affinity policy is a tool users can use to control whether the nodes of a cluster are deployed on the same physical machine or spread across different physical machines. Keeping nodes together can be beneficial if you need fast communication between them; spreading them helps with load balancing or high availability. For more info please refer to [https://docs.openstack.org/senlin/train/scenarios/affinity.html](https://docs.openstack.org/senlin/train/scenarios/affinity.html).
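One common way to express such placement in OpenStack is via Nova server groups; a minimal CLI sketch (group, flavor, image and network names are placeholders):
```
# create a server group with the anti-affinity policy (use "affinity" to co-locate instances)
openstack server group create --policy anti-affinity my-server-group
# launch an instance as a member of that group
openstack server create --flavor standard.medium --image "debian-10-x86_64" \
  --network my-net1 \
  --hint group=$(openstack server group show my-server-group -c id -f value) \
  my-spread-server1
```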
## Cloud orchestration tools
### Terraform
Terraform is a widely used orchestration tool for creating and managing cloud infrastructure. It can greatly simplify cloud operations: if something goes wrong, you can easily rebuild your cloud infrastructure.
It manages resources like virtual machines, DNS records, etc.
It is driven by configuration templates containing information about its tasks and resources, saved as *.tf files. If the configuration changes, Terraform detects it and plans additional operations to apply those changes.
Here is an example of how such a configuration file can look:
```
variable "image" {
default = "Debian 10"
}
variable "flavor" {
default = "standard.small"
}
variable "ssh_key_file" {
default = "~/.ssh/id_rsa"
}
```
You can use the OpenStack Provider, which is a tool for managing the resources that OpenStack supports via Terraform. Terraform has an advantage over Heat in that it can also be used with other platforms, not only with OpenStack.
For more detail please refer to [https://registry.terraform.io/providers/terraform-provider-openstack/openstack/latest/docs](https://registry.terraform.io/providers/terraform-provider-openstack/openstack/latest/docs) and [https://www.terraform.io/intro/index.html](https://www.terraform.io/intro/index.html).
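Once your `*.tf` files and OpenStack application credentials are in place, the standard Terraform workflow (not specific to MetaCentrum Cloud) is:
```
terraform init      # download the required providers (e.g. the OpenStack provider)
terraform plan      # preview which resources would be created or changed
terraform apply     # create or update the resources
terraform destroy   # tear the resources down again
```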
### Heat
Heat is another orchestration tool for managing cloud resources. It is OpenStack-exclusive, so you can't use it anywhere else. Just like Terraform, it can simplify orchestration operations in your cloud infrastructure.
It also uses configuration templates for the specification of information about resources and tasks. You can manage resources like servers, floating IPs, volumes, security groups, etc. via Heat.
Here is an example of a Heat configuration template in the form of a *.yaml file:
```
heat_template_version: 2021-04-06

description: Test template

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      key_name: id_rsa
      image: Debian10_image
      flavor: standard.small
```
You can find more information here [https://wiki.openstack.org/wiki/Heat](https://wiki.openstack.org/wiki/Heat).
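Assuming the template above is saved as `my-stack.yaml` (a placeholder name) and you have the **heat_stack_owner** role, a stack can be managed via the CLI, for example:
```
openstack stack create -t my-stack.yaml my-stack
openstack stack list
openstack stack delete my-stack
```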
## Object storage management
OpenStack supports object storage based on [OpenStack Swift](https://docs.openstack.org/swift/latest/api/object_api_v1_overview.html). An object storage container is created by clicking `+Container` on the [Object storage containers page](https://dashboard.cloud.muni.cz/project/containers).
Every object typically contains data along with metadata and a unique global identifier used to access it. OpenStack allows you to upload your files via the HTTPS protocol. There are two ways of managing a created object storage container:
1. Use OpenStack component [Swift](https://docs.openstack.org/swift/train/admin/index.html)
2. Use [S3 API](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html)
In both cases, you will need application credentials to be able to manage your data.
### Swift credentials
The easiest way to generate **Swift** storage credentials is through the [MetaCentrum Cloud dashboard](https://dashboard.cloud.muni.cz). You can generate application credentials as described [here](/cloud/cli/#getting-credentials). You must have the **heat_stack_owner** role.
### S3 credentials
If you want to use the **S3 API**, you will need to generate EC2 credentials for access. Note that to generate EC2 credentials you will also need credentials containing the **heat_stack_owner** role. Once you have sourced your CLI credentials, you can generate EC2 credentials with the following command:
```
$ openstack ec2 credentials create
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| access | 896**************************651 |
| project_id | f0c**************************508 |
| secret | 336**************************49c |
...
| user_id | e65***********************************************************6a |
+------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
```
Then you may use one of the S3 clients (minio client `mc`, s3cmd, ...).
Running the minio client against the created object storage container is straightforward:
```
$ mc config host add swift-s3 https://object-store.cloud.muni.cz 896**************************651 336**************************49c --api S3v2
Added `swift-s3` successfully.
$ mc ls swift-s3
[2021-04-19 15:13:45 CEST] 0B freznicek-test/
```
The s3cmd client requires a configuration file with your credentials that looks like this:
```
[default]
access_key = 896**************************651
secret_key = 336**************************49c
host_base = object-store.cloud.muni.cz
host_bucket = object-store.cloud.muni.cz
use_https = True
```
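With this configuration saved (s3cmd reads `~/.s3cfg` by default), basic operations look like the following sketch (bucket and file names are placeholders):
```
s3cmd mb s3://my-bucket                      # create a bucket
s3cmd put my-data.tar.gz s3://my-bucket/     # upload a file
s3cmd ls s3://my-bucket                      # list the bucket contents
s3cmd get s3://my-bucket/my-data.tar.gz      # download the file again
```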
For more info please refer to [https://docs.openstack.org/swift/latest/s3_compat.html](https://docs.openstack.org/swift/latest/s3_compat.html) and [https://docs.openstack.org/train/config-reference/object-storage/configure-s3.html](https://docs.openstack.org/train/config-reference/object-storage/configure-s3.html).
---
title: "Best practices"
date: 2021-05-18T11:22:35+02:00
draft: false
weight: -90
---
The following article summarizes effective approaches to using our cloud.
## How many public IP addresses do I need?
There are [two pools of public IPv4 addresses available](/cloud/network/#group-project).
**Unfortunately, the number of available public IPv4 addresses is limited.** Read the details on [how to set up your project networking](/cloud/network/).
In most cases, even when you build a huge cloud infrastructure, you should be able to access it via a few (up to two) public IP addresses.
![](/cloud/best-practices/images/accessing-vms-through-jump-host-6-mod.png)
The example cloud architecture contains the following project VMs:
| VM name | VM operating system | VM IP addresses | VM flavor type | VM description |
| :--- | :---: | :-----------: | :------: | :------: |
| freznicek-cos8 | centos-8-x86_64 | 172.16.0.54, 147.251.21.72 (public) | standard.medium | jump host |
| freznicek-ubu | ubuntu-focal-x86_64 | 172.16.1.67 | standard.medium | internal VM |
| freznicek-deb10 | debian-10-x86_64 | 172.16.0.158 | standard.medium | internal VM |
| ... | ... | ... | ... | internal VM |
### Setting-up the VPN tunnel via encrypted SSH with [sshuttle](https://github.com/sshuttle/sshuttle)
```sh
# terminal A
# Launch tunnel through jump-host VM
# Install sshuttle
if grep -qE 'ID_LIKE=.*debian' /etc/os-release; then
  # on debian like OS
  sudo apt-get update
  sudo apt-get -y install sshuttle
elif grep -qE 'ID_LIKE=.*rhel' /etc/os-release; then
  # on RHEL like systems
  sudo yum -y install sshuttle
fi
# Establish the SSH tunnel (and stay connected) where
# 147.251.21.72 is IP address of example jump-host
# 172.16.0.0/22 is IP subnet where example cloud resources are internally available
sshuttle -r centos@147.251.21.72 172.16.0.0/22
```
### Accessing (hidden) project VMs through the VPN tunnel
```sh
# terminal B
# Access all VMs allocated in the project in 172.16.0.0/22 subnet (a C5 instance shown on picture)
$ ssh debian@172.16.0.158 uname -a
Linux freznicek-deb10 4.19.0-14-cloud-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64 GNU/Linux
# Access is not limited to any protocol, you may access web servers as well as databases...
$ curl 172.16.1.67:8080
Hello, world, cnt=1, hostname=freznicek-ubu
```
## How to store project data
Every project generates a certain amount of data that needs to be stored. There are several options (sorted by preference):
* as objects or files in an [S3 compatible storage](https://en.wikipedia.org/wiki/Amazon_S3)
  * S3 compatible storage may be requested as a separate cloud storage resource ([OpenStack Swift storage + S3 API](https://docs.openstack.org/swift/latest/s3_compat.html))
  * S3 storage may also be easily launched on one of the project VMs ([minio server](https://github.com/minio/minio))
* as files on
  * a separate (ceph) volume
  * the virtual machine disk volume (i.e. no explicit volume for the project data)
* as objects or files in the [OpenStack Swift storage](https://docs.openstack.org/swift/train/admin/objectstorage-intro.html)
MetaCentrum Cloud stores raw data:
* in ceph cloud storage on rotational disks (SSDs will be available soon)
* on hypervisor (bare metal) disks (rotational, SSD, NVMe SSD)
We encourage all users to back up important data themselves while we work on a cloud-native backup solution.
## How to compute (scientific) tasks
Your application may be:
* `A.` a single-instance application, running on one cloud computation resource
* `B.` a multi-instance application with messaging support (MPI), where all instances run on the same cloud computation resource
* `C.` true distributed computing, where the application runs in jobs scheduled to multiple cloud computation resources
Applications running in a single cloud resource (`A.` and `B.`) are a direct match for MetaCentrum Cloud OpenStack. Distributed applications (`C.`) are best handled by [MetaCentrum PBS system](https://metavo.metacentrum.cz/cs/state/personal).
## How to create and maintain cloud resources
Your project runs within a MetaCentrum Cloud OpenStack project, where you can claim MetaCentrum Cloud OpenStack resources (for example a virtual machine, a floating IP, ...). There are multiple ways to set up MetaCentrum Cloud OpenStack resources:
* manually using the [MetaCentrum Cloud OpenStack Dashboard UI](https://dashboard.cloud.muni.cz) (OpenStack Horizon)
* automated approaches
* [terraform](https://registry.terraform.io/providers/terraform-provider-openstack/openstack/latest/docs) ([example 1](https://github.com/terraform-provider-openstack/terraform-provider-openstack/tree/main/examples/app-with-networking), [example 2](https://gitlab.ics.muni.cz/cloud/terrafrom-demo))
* ansible
* [openstack heat](https://docs.openstack.org/heat/train/template_guide/hello_world.html)
If your project infrastructure (MetaCentrum Cloud OpenStack resources) within the cloud is static, you may select the manual approach with the [MetaCentrum Cloud OpenStack Dashboard UI](https://dashboard.cloud.muni.cz). Some projects need to allocate MetaCentrum Cloud OpenStack resources dynamically; in such cases we strongly encourage automation even at this stage.
## How to transfer your work to cloud resources and make it up-to-date
There are several options for transferring the project to cloud resources:
* manually with `scp`
* automatically with `ansible` ([example](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/de673766b832c48142c6ad1be73f5bce046b02a2/ansible/roles/cloud-project-native/tasks/init.yml#L29-47))
* automatically with `terraform`
* indirectly in a project container (example: [releasing a project container image](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/de673766b832c48142c6ad1be73f5bce046b02a2/.gitlab-ci.yml#L17-80), [pulling and running a project image](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/de673766b832c48142c6ad1be73f5bce046b02a2/ansible/roles/cloud-project-container/tasks/deploy.yml#L16-28))
* indirectly in an OpenStack (Glance) image (you need to obtain the image_uploader role)
  * OpenStack Glance images may be [public, private, community or shared](/cloud/gui/#image-visibility).
### ssh to cloud VM resources and manual update
In this scenario, you log into your cloud VM and perform all needed actions manually. This approach does not scale well and is not very effective, as different users may configure cloud VM resources in different ways, sometimes resulting in different resource behavior.
### automated work transfer and synchronization with docker (or podman)
There are automation tools that may help ease your cloud usage:
* ansible and/or terraform
* container runtime engine (docker, podman, ...)
Ansible is a cloud automation tool that helps you with:
* keeping your VM updated
* automatically migrating your applications or data to/from cloud VM
A container runtime engine helps you put your work into a container stored in a container registry.
Putting your work into a container has several advantages:
* you can share the code including binaries in a consistent environment (even across different operating systems)
* it avoids application [re]compilation in the cloud
* your application running in the container is isolated from the host's container runtime, so
  * you may run multiple instances easily
  * you may easily compare different versions at once without collisions
* you become ready for a future Kubernetes cloud
As a container registry we suggest either:
* public quay.io ([you need to register for free first](https://quay.io/signin/))
* private Masaryk University [registry.gitlab.ics.muni.cz:443](registry.gitlab.ics.muni.cz:443)
An example of such an approach is demonstrated in [`cloud-estimate-pi` project](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi).
## How to get data from the project's experiments to your workstation
It depends on how your data are stored; the options are:
* file transfer
  * manual file transfer with `scp` (and possibly `sshuttle`)
  * automated file transfer with `scp` + `ansible` (and possibly `sshuttle`), demonstrated in the [`cloud-estimate-pi` project](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/master/ansible/roles/cloud-project-container/tasks/download.yml)
* database data/objects transfer
  * data stored in S3 compatible storage may be easily retrieved via the [minio client application `mc`](https://docs.min.io/docs/minio-client-complete-guide)
  * data stored in OpenStack Swift may be retrieved via the [OpenStack Swift python client `swift`](https://docs.openstack.org/python-swiftclient/train/swiftclient.html)
## How to make your application in the cloud highly available
Let's assume your application is running in multiple instances in the cloud already.
To make your application highly available (HA), you need to:
* run the application instances on different cloud resources
* use the MetaCentrum Cloud load-balancer component (based on [OpenStack Octavia](https://docs.openstack.org/octavia/train/reference/introduction.html#octavia-terminology)), which balances traffic to one of the app's instances (a CLI sketch follows below)
Your application will also need a Fully Qualified Domain Name (FQDN) to be easily reachable; the FQDN is set on the public floating IP linked to the load-balancer.
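A minimal sketch of setting up such a load balancer with the Octavia CLI (requires the `python-octaviaclient` plugin; the subnet, member address and ports below are placeholders):
```sh
openstack loadbalancer create --name my-lb --vip-subnet-id my-sub1
openstack loadbalancer listener create --name my-listener \
  --protocol HTTP --protocol-port 80 my-lb
openstack loadbalancer pool create --name my-pool \
  --lb-algorithm ROUND_ROBIN --listener my-listener --protocol HTTP
openstack loadbalancer member create --subnet-id my-sub1 \
  --address 172.16.1.67 --protocol-port 8080 my-pool
# finally, associate a public floating IP with the load balancer's VIP port
```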
## Cloud project example and workflow recommendations
This chapter summarizes effective cloud workflows on the (example) [`cloud-estimate-pi` project](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi).
The project recommendations are:
1. Project files should be versioned in [a VCS](https://en.wikipedia.org/wiki/Version_control) (git)
1. The project repository should
* contain the documentation
* follow standard directory structure `src/`, `conf/`, `kubernetes/`
* include CI/CD process pipeline ([`.gitlab-ci.yml`](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/master/.gitlab-ci.yml), ...)
* contain deployment manifests or scripts (kubernetes manifests or declarative deployment files (ansible, terraform, puppet, ...))
1. The project release should be automated and triggered by pushing a [semver v2 compatible](https://semver.org/) [tag](https://dev.to/neshaz/a-tutorial-for-tagging-releases-in-git-147e)
1. The project should support execution in a container as there are significant benefits: ([`Dockerfile`](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/master/Dockerfile))
* consistent environment (surrounding the application)
* application portability across all Operating Systems
* application isolation from host Operating System
* multiple ways to execute the application (container clouds support advanced container life-cycle management)
1. The project should have a changelog (either manually written or generated) (for instance [`CHANGELOG.md`](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/master/CHANGELOG.md))
We recommend that every project define a cloud usage workflow, which may consist of:
1. Cloud resource initialization, performing
* cloud resource update to the latest state
* install necessary tools for project compilation and execution
* test container infrastructure (if it is used)
* transfer project files if they need to be compiled
1. Project deployment (and execution) in the cloud, consisting of
* compilation of the project in the cloud (if native execution is selected)
* execution of the project application[s] in the cloud
* storing or collecting project data and logs
1. Download project data from cloud to workstation (for further analysis or troubleshooting)
* download of project data from cloud to user's workstation
1. Cloud resource destruction
## Road-map to effective cloud usage
Project automation is usually done in CI/CD pipelines. Read [Gitlab CI/CD article](https://docs.gitlab.com/ee/ci/introduction/) for more details.
![](https://docs.gitlab.com/ee/ci/introduction/img/gitlab_workflow_example_extended_v12_3.png)
The following table shows the different cloud usage phases:
| Cloud usage phase | Cloud resource management | Project packaging | Project deployment | Project execution | Project data synchronization | Project troubleshooting |
| :--- | :---: | :-----------: | :------: | :------------: | :------------: | :------------: |
| ineffective manual approach | manual (`ssh`) | manually built binaries (versioned?) | manual deployment (scp) | manual execution (ssh) | manual transfers (scp) | manual investigation on VM (scp) |
| ... | ... | ... | ... | ... | ... | ... |
| [continuous delivery](https://docs.gitlab.com/ee/ci/introduction/#continuous-delivery) (automated, but deploy manual) | semi-automated (GUI + `ansible` executed manually) | container ([semver](https://semver.org) versioned) | semi-automated (`ansible` executed manually) | semi-automated (`ansible` executed manually) | semi-automated (`ansible` executed manually) | semi-automated (`ansible` and `ssh` manually) |
| [continuous deployment](https://docs.gitlab.com/ee/ci/introduction/#continuous-deployment) (fully-automated) | automated (`terraform` and/or `ansible` in CI/CD) | container ([semver](https://semver.org) versioned) | automated (`ansible` in CI/CD) | automated (`ansible` in CI/CD) | automated (`ansible` in CI/CD) | semi-automated (`ansible` in CI/CD and `ssh` manually) |
## How to convert a legacy application into a container for the cloud?
Containerization of applications is one of the best practices when you want to share your application and execute it in the cloud. Read about [the benefits](https://cloud.google.com/containers).
The application containerization process consists of the following steps:
* Select a container registry (where container images with your applications are stored)
  * Publicly available registries like [quay.io](https://quay.io) are best, as everyone may receive your application even without credentials
* Your project applications should be containerized by creating a `Dockerfile` ([example](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/master/Dockerfile))
  * Follow a [docker guide](https://www.freecodecamp.org/news/a-beginners-guide-to-docker-how-to-create-your-first-docker-application-cc03de9b639f/) if you are not familiar with the `Dockerfile` syntax
  * If your project is huge and contains multiple applications, it is recommended to divide it into a few parts by topic, each part building a separate container.
* Project CI/CD jobs should build the applications, create container image[s] and finally release (push) the container image[s] with the applications to the container registry
  * Everyone is then able to use your applications (packaged in a container image) regardless of which operating system (OS) they use. A container engine (docker, podman, ...) is available for all mainstream OSes.
* Cloud resources are then told to pull and run your container image[s] ([example](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi/-/blob/de673766b832c48142c6ad1be73f5bce046b02a2/ansible/roles/cloud-project-container/tasks/deploy.yml#L11-28))
Learn best-practices on our cloud example [project `cloud-estimate-pi`](https://gitlab.ics.muni.cz/cloud/cloud-estimate-pi).
---
title: "Command Line Interface"
date: 2021-05-18T11:22:35+02:00
draft: false
---
In order to have access to OpenStack's API, you have to use so-called OpenStack Application Credentials. In short,
it is a form of token-based authentication providing easy and secure access without the use of passwords.
## Getting Credentials
1. In **Identity &gt; Application Credentials**, click on **Create Application Credential**.
2. Choose name, description and expiration date & time.
![](images/app_creds_1.png)
{{< hint info >}}
**Notice:**
Do NOT select specific roles, unless directed otherwise by user support.
{{< /hint >}}
{{< hint info >}}
**Notice:**
If you decide to select specific roles, you should always include at least the **member** role.
If you are planning to use the orchestration API, add the **heat_stack_owner** role as well and
check **Unrestricted**.
{{< /hint >}}
3. Download provided configuration files for the OpenStack CLI client.
![](images/app_creds_2.png)
## Setting Up
1. [Install](https://pypi.org/project/python-openstackclient/) and
[configure](https://docs.openstack.org/python-openstackclient/train/configuration/index.html)
OpenStack CLI client.
{{< hint danger >}}
**WARNING:**
Add the following line to the **openrc** file:
`export OS_VOLUME_API_VERSION=3`
Add the following line to the **clouds.yaml** file:
`volume_api_version: 3`
{{< /hint >}}
2. Follow the official [Launch instances](https://docs.openstack.org/nova/train/user/launch-instances.html) guide.
---
## Creating a key-pair
You can either get your private key from the dashboard, or you can use the **ssh-keygen** command to create a new one:
```
ssh-keygen -b 4096
```
You will then be asked to specify the output file and a passphrase for your key.
1. Assuming your ssh public key is stored in `~/.ssh/id_rsa.pub`
```
openstack keypair create --public-key ~/.ssh/id_rsa.pub my-key1
```
## Create a security group
1. Create:
```
openstack security group create my-security-group
```
2. Add rules to your security group:
```
openstack security group rule create --description "Permit SSH" --remote-ip 0.0.0.0/0 --protocol tcp --dst-port 22 --ingress my-security-group
openstack security group rule create --description "Permit ICMP (any)" --remote-ip 0.0.0.0/0 --protocol icmp --icmp-type -1 --ingress my-security-group
```
3. Verify:
```
openstack security group show my-security-group
```
## Create a network
1. Create network + subnet (from an auto-allocated pool)
```
openstack network create my-net1
openstack subnet create --network my-net1 --subnet-pool private-192-168 my-sub1
```
## Router management
### Router Creation
2. Create router:
```
openstack router create my-router1
```
The new router has no ports, which makes it pretty useless; we need to create at least 2 interfaces (external and internal).
3. Set the external network for the router (let us say public-muni-147-251-124); the external port will be created automatically:
```
openstack router set --external-gateway public-muni-147-251-124 my-router1
```
4. Check which IP address is set as gateway for our subnet (default: first address of the subnet):
```
GW_IP=$(openstack subnet show my-sub1 -c gateway_ip -f value)
```
5. Create an internal port for the router (gateway for the network my-net1):
```
openstack port create --network my-net1 --disable-port-security --fixed-ip ip-address=$GW_IP my-net1-port1-gw
```
6. Add port to the router:
```
openstack router add port my-router1 my-net1-port1-gw
```
### Clear gateway
1. Find your router:
```
$ openstack router list
+--------------------------------------+-----------------------+--------+-------+-------------+------+----------------------------------+
| ID | Name | Status | State | Distributed | HA | Project |
+--------------------------------------+-----------------------+--------+-------+-------------+------+----------------------------------+
| 0bd0374d-b62e-429a-8573-3e8527399b68 | auto_allocated_router | ACTIVE | UP | None | None | f0c339b86ddb4699b6eab7acee8d4508 |
+--------------------------------------+-----------------------+--------+-------+-------------+------+----------------------------------+
```
2. Verify:
```
$ openstack router show 0bd0374d-b62e-429a-8573-3e8527399b68
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | None |
| availability_zones | None |
| created_at | 2019-06-06T04:47:15Z |
| description | None |
| distributed | None |
| external_gateway_info | {"network_id": "8d5e18ab-5d43-4fb5-83e9-eb581c4d5365", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "41e0cd1c-5ab8-465f-8605-2e7d6a3fe5b4", "ip_address": "147.251.124.177"}]} |
| flavor_id | None |
| ha | None |
| id | 0bd0374d-b62e-429a-8573-3e8527399b68 |
| interfaces_info | [{"port_id": "92c3f6fe-afa8-47c6-a1a6-f6a1b3c54f72", "ip_address": "192.168.8.193", "subnet_id": "e903d5b9-ac90-4ca8-be2c-c509a0153982"}] |
| location | Munch({'cloud': '', 'region_name': 'brno1', 'zone': None, 'project': Munch({'id': 'f0c339b86ddb4699b6eab7acee8d4508', 'name': None, 'domain_id': None, 'domain_name': None})}) |
| name | auto_allocated_router |
| project_id | f0c339b86ddb4699b6eab7acee8d4508 |
| revision_number | 24 |
| routes | |
| status | ACTIVE |
| tags | |
| updated_at | 2019-06-06T06:34:34Z |
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
```
3. Unset gateway (by ID of the router):
```
$ openstack router unset --external-gateway 0bd0374d-b62e-429a-8573-3e8527399b68
```
### Set Gateway
1. Choose a new external network:
```
$ openstack network list
+--------------------------------------+--------------------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+--------------------------+--------------------------------------+
| 410e1b3a-1971-446b-b835-bf503917680d | public-cesnet-78-128-251 | 937106e2-3d51-43cc-83b6-c779465011e5 |
| 8d5e18ab-5d43-4fb5-83e9-eb581c4d5365 | public-muni-147-251-124 | 41e0cd1c-5ab8-465f-8605-2e7d6a3fe5b4 |
| c708270d-0545-4be2-9b8f-84cf75ce09cf | auto_allocated_network | e903d5b9-ac90-4ca8-be2c-c509a0153982 |
| d896044f-90eb-45ee-8cb1-86bf8cb3f9fe | private-muni-10-16-116 | 3d325abf-f9f8-4790-988f-9cd3d1dea4f3 |
+--------------------------------------+--------------------------+--------------------------------------+
```
2. Set the new external network for the router
```
$ openstack router set --external-gateway public-cesnet-78-128-251 0bd0374d-b62e-429a-8573-3e8527399b68
```
## Create volume
{{< hint danger >}}
**WARNING**
Skipping this section can lead to irreversible loss of data.
{{</hint>}}
Volumes are created automatically when creating an instance in the GUI, but when using the CLI we need to create them manually.
1. Create a bootable volume from an image (e.g. CentOS):
```
openstack volume create --image "centos-7-1809-x86_64" --size 40 my_vol1
```
## Create server
1. Create the instance:
```
openstack server create --flavor "standard.small" --volume my_vol1 \
--key-name my-key1 --security-group my-security-group --network my-net1 my-server1
```
## Floating IP address management
### Creating and assigning a new FIP
1. Allocate a new floating IP:
```
$ openstack floating ip create public-cesnet-78-128-251
+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2019-06-06T06:56:51Z |
| description | |
| dns_domain | None |
| dns_name | None |
| fixed_ip_address | None |
| floating_ip_address | 78.128.251.27 |
| floating_network_id | 410e1b3a-1971-446b-b835-bf503917680d |
| id | d054b6b3-bbd3-485d-a46b-b80682df8fc8 |
| location | Munch({'cloud': '', 'region_name': 'brno1', 'zone': None, 'project': Munch({'id': 'f0c339b86ddb4699b6eab7acee8d4508', 'name': None, 'domain_id': None, 'domain_name': None})}) |
| name | 78.128.251.27 |
| port_details | None |
| port_id | None |
| project_id | f0c339b86ddb4699b6eab7acee8d4508 |
| qos_policy_id | None |
| revision_number | 0 |
| router_id | None |
| status | DOWN |
| subnet_id | None |
| tags | [] |
| updated_at | 2019-06-06T06:56:51Z |
+---------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
```
2. And assign it to your server:
```
$ openstack server add floating ip net-test1 78.128.251.27
```
### Remove existing floating IP
1. List your servers:
```
$ openstack server list
+--------------------------------------+-----------+--------+-------------------------------------------------------+-------+----------------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-----------+--------+-------------------------------------------------------+-------+----------------+
| 1a0d4624-5294-425a-af37-a83eb0640e1c | net-test1 | ACTIVE | auto_allocated_network=192.168.8.196, 147.251.124.248 | | standard.small |
+--------------------------------------+-----------+--------+-------------------------------------------------------+-------+----------------+
```
2. Remove the floating IP:
```
$ openstack server remove floating ip net-test1 147.251.124.248
$ openstack floating ip delete 147.251.124.248
```
## Cloud tools
You can inspect cloud tools [here](/cloud/tools)
## Full Reference
See [OpenStack CLI Documentation](https://docs.openstack.org/python-openstackclient/train/).
---
title: "Contribute"
date: 2021-05-18T11:22:35+02:00
draft: false
weight: 110
---
We use the open-source [Hugo](https://gohugo.io/) project to generate the documentation.
## Requirements
[Install](https://gohugo.io/getting-started/installing/) Hugo
## Work-flow Overview
1. Fork & clone repository
2. Create a branch
3. Commit your changes
4. Push to the branch
5. Create a Merge Request with the content of your branch
### Fork Repository
See [GitLab @ ICS MU](https://gitlab.ics.muni.cz/cloud/documentation/forks/new) for details. This will create your own clone of our repository where you will be able to make changes. Once you are happy with your changes, use GitLab to submit them to our original repository.
### Clone Repository
```bash
# after creating your own copy of the repository on GitLab
git clone git@gitlab.ics.muni.cz:${GITLAB_USER}/documentation.git
```
### Create New Branch
```bash
# in `documentation`
git checkout -b my_change
```
### Make Changes & Run Local Server
```bash
# in `documentation`
hugo --config config-dev.toml serve
```
> Edits will be shown live in your browser window, no need to restart the server.
### Commit and Push Changes
```bash
git commit -am "My updates"
git push origin my_change
```
### Submit Changes
Create a *Merge Request* via [GitLab @ ICS MU](https://gitlab.ics.muni.cz/cloud/documentation/merge_requests/new).
## Tips
### Disable table of contents
The table of contents is generated automatically for every page. To hide it, add this line to the page's header:
```
disableToc: true
```
### Hide from the menu
To hide a page from the menu, add this line to the page's header:
```
GeekdocHidden: true
```
### Hints
To show "hint bar" similar to this one:
{{< hint info >}}
some text
{{</hint>}}
you can use *shortcodes*.
Please see [theme documentation](https://geekdocs.de/shortcodes/hints/).
---
title: "Frequently Asked Questions"
date: 2021-05-18T11:22:35+02:00
draft: false
weight: 90
---
## Where can I find how to use MetaCentrum Cloud effectively?
Read our [cloud best-practice tips](/cloud/best-practices).
## What to expect from the cloud and cloud computing
[Migration of Legacy Systems to Cloud Computing](https://www.researchgate.net/publication/280154501_Migration_of_Legacy_Systems_to_Cloud_Computing) article gives an overview of what to expect when joining a cloud with a personal legacy application.
### What are the cloud computing benefits?
The most visible [cloud computing](https://en.wikipedia.org/wiki/Cloud_computing) benefits are:
* cost savings
* online access to the cloud resources for everyone authorized
* cloud project scalability (elasticity)
* online cloud resource management and improved sustainability
* security and privacy improvements
* encouraged cloud project agility
## How do I register?
Follow instructions for registering in [MetaCentrum Cloud](/cloud/register).
## Where do I report a problem?
First, try searching the documentation for an answer to your problem. If that does not yield results, open a
ticket with [cloud@metacentrum.cz](mailto:cloud@metacentrum.cz). When contacting user support, always
include your *username* (upper right corner of the web interface) and *domain* with
active *project* (upper left corner of the web interface) as well as a description of
your problem and/or an error message if available.
## What networks can I use to access my instances?
Personal projects can allocate floating IPs from *public-cesnet-78-128-250-PERSONAL*. Routing is preset for this address pool.
Group projects can currently allocate floating IPs from networks ending with *GROUP* suffix as well as *private-muni-10-16-116*.
Furthermore, IP addresses allocated from *public-muni-147-251-124-GROUP* and *public-muni-147-251-255-GROUP* are released daily, so we encourage
using only *public-cesnet-78-128-251-GROUP* and *private-muni-10-16-116* for group projects.
Follow instructions at [changing the external network](/cloud/network) in order to change your public network.
## Issues with network MTU (Docker, kubernetes, custom network overlays)
OpenStack compute server instances should use an MTU (maximum transmission unit) of 1442 bytes instead of the standard 1500 bytes. The instance itself can negotiate the correct MTU with its counterpart via Path MTU Discovery, but Docker needs the MTU set explicitly. Refer to the documentation for setting up a 1442-byte MTU in [Docker](https://docs.docker.com/v17.09/engine/userguide/networking/default_network/custom-docker0/) or [Kubernetes](https://docs.projectcalico.org/v3.5/usage/configuration/mtu), or change the configuration with the steps below.
### Changes in Docker daemon
```sh
# edit docker configuration
sudo vi /etc/docker/daemon.json
# MTU 1442 or lower
{
"mtu": 1442
}
# then restart docker
sudo systemctl restart docker
```
### MTU detection
You can use the following bash function to detect the end-to-end maximum packet size that works without packet fragmentation.
```sh
# detect_mtu <host>
# measure end-to-end MTU
function detect_mtu() {
  local endpoint_host="$1"
  for i_mtu in `seq 1200 20 1500` `seq 1500 50 9000`; do
    if ping -M do -s $(( $i_mtu - 28 )) -c 5 "${endpoint_host}" >/dev/null; then
      echo "Packets of size ${i_mtu} work as expected"
    else
      echo "Packets of size ${i_mtu} are blocked by MTU limit on the path to destination host ${endpoint_host}!"
      break
    fi
  done
}
# execute
detect_mtu www.nic.cz
```
## Issues with proxy in private networks
OpenStack instances can use either public or private networks. If you are using a private network and you need to access the internet for updates etc.,
you can use the MUNI proxy server *proxy.ics.muni.cz*. This server only supports the HTTP protocol, not HTTPS. When configuring it, also consider which applications
will be using it, because they may have their own configuration files where this information must be set; if so, find the particular setting and configure the
mentioned proxy server with port 3128 there. Most applications use the system-wide setting, which can be done by editing the file `/etc/environment` and adding the line
`http_proxy="http://proxy.ics.muni.cz:3128/"`. Then either restart your machine or run the command `source /etc/environment`.
## How many floating IPs does my group project need?
One floating IP per project should generally suffice. All OpenStack instances are deployed on top of internal OpenStack networks. These internal networks are not by default accessible from outside of OpenStack, but instances on top of the same internal network can communicate with each other.
To access the internet from an instance, or to access an instance from the internet, you could allocate a floating public IP per instance. However, since there are not many public IP addresses available and assigning a public IP to every instance is not a security best practice, both public and private clouds use these two concepts:
* **internet access is provided by virtual router** - all new OpenStack projects are created with *group-project-network* internal network connected to a virtual router with public IP as a gateway. Every instance created with *group-project-network* can access the internet through NAT provided by its router by default.
* **accessing the instances:**
* **I need to access instances by myself** - the best practice for accessing your instances is to create one server with a floating IP, called a [jump host](https://en.wikipedia.org/wiki/Jump_server), and then access all other instances through this host. Simple setup:
1. Create an instance with any Linux.
2. Associate floating IP with this instance.
3. Install [sshuttle](https://github.com/sshuttle/sshuttle) on your client.
4. `sshuttle -r root@jump_host_fip 192.168.0.1/24`. All your traffic to the internal OpenStack network *192.168.0.1/24* is now tunneled through the jump host.
* **I need to serve content (e.g. a web service) to other users** - public and private clouds provide an LBaaS (Load-Balancer-as-a-Service) service, which proxies users' traffic to instances. MetaCentrum Cloud provides this service in experimental mode - [documentation](/cloud/gui#lbaas)
In case these options are not suitable for your use case, you can still request multiple floating IPs.
## I can't log into OpenStack, how is that possible?
The most common reason why you can't log into your OpenStack account is that your membership in Metacentrum has expired. To extend your membership in Metacentrum,
you can visit [https://metavo.metacentrum.cz/en/myaccount/prodlouzeni](https://metavo.metacentrum.cz/en/myaccount/prodlouzeni).
## Backups
All the data is protected against disk failures. We are not responsible for any data loss that may occur. For now, we do not provide any means for offsite backups.
What can I do?
- Use OpenStack Snapshots for local backup.
- Use backup software like Borg or Restic to create an offsite incremental backup (a sketch follows below).
- Use backup/data storage services provided by MUNI or CESNET (e.g. https://it.muni.cz/sluzby/zalohovani-bacula).
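A minimal offsite-backup sketch using restic over SFTP (the repository host, path and backed-up directory are placeholders):
```sh
# initialize the offsite repository (run once)
restic -r sftp:user@backup.example.org:/backups/my-vm init
# create an incremental backup of /data
restic -r sftp:user@backup.example.org:/backups/my-vm backup /data
# list existing snapshots
restic -r sftp:user@backup.example.org:/backups/my-vm snapshots
```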
## I can't access my cloud VMs. MetaCentrum OpenStack network security protection
Access to the MetaCentrum cloud is protected by [CSIRT-MU](https://csirt.muni.cz/?lang=en) and [CSIRT-CESNET](https://csirt.cesnet.cz/en/index) security teams.
Some interactions with allocated cloud resources may cause your cloud access to be blocked. This is because legitimate SSH access to a newly allocated virtual machine (VM) looks very similar to an SSH brute-force attack.
A newly created VM will respond to SSH connection attempts in different ways as it moves through the setup stages:
* A) The VM is booting and the network is being established. At this stage, there is no functional connection point, and connection attempts will time out.
* B) SSH is being set up. At the start of its lifetime, a VM runs the cloud-init process, which enables SSH authentication with the user's SSH key. Until then, connections are refused because the user cannot be verified.
* C) The connection is finally successful. All setup processes are finished.
When an SSH brute-force attack is attempted, the scenario is very similar: repeated unsuccessful (unauthorized) connections to the VM are made (resulting in connection resets or timeouts), and once the attacker guesses the right credentials, they get connected and logged in.
Therefore, when security systems discover such a suspicious series of unsuccessful connections followed by a successful one, they will likely block your IP address from accessing the allocated cloud VMs.
### Best practices for accessing cloud resources without getting blocked
The key practices that help avoid source IP address blockage are:
* connect to the cloud infrastructure via a single public-facing jump / bastion node (using [sshuttle](https://github.com/sshuttle/sshuttle#readme), [ssh ProxyJump](https://www.jeffgeerling.com/blog/2022/using-ansible-playbook-ssh-bastion-jump-host) or possibly [ssh ProxyCommand](https://blog.ruanbekker.com/blog/2020/10/26/use-a-ssh-jump-host-with-ansible/))
* use the OpenStack API to watch whether the VM is ACTIVE
* relax the public IP try-connect loop timing
* configure the SSH client to [reuse connections, for instance with `-o ControlMaster=auto -o ControlPersist=60s`](https://en.wikibooks.org/wiki/OpenSSH/Cookbook/Multiplexing)
#### Example
As an example, consider a group of virtual machines, where at least one has access to the internet using an IPv4 or IPv6 public address, and they are connected by an internal network (e.g. 10.0.0.0/24).
To access the first VM with a public address `<public-ip-jump>`:
* Wait for the machine to enter the ACTIVE state via the OpenStack API: `openstack server show <openstack-server-id> -f json | jq -r .status`.
* After the VM is in the ACTIVE state, try to open a connection to the SSH port with a timeout of approx. 5 seconds and a retry period of at least 30 seconds (a sketch follows after the lists below).
To access other VMs on the same cloud internal network (once the SSH connection to the first one is established):
* The recommended method is to create an SSH VPN using sshuttle with `sshuttle -r user@<public-ip-jump> 10.0.0.0/24`
* Address all internal virtual servers with their internal address (CIDR 10.0.0.0/24) and use the 1st (jump / bastion) machine with the public address as an SSH proxy.
* Follow the same steps to connect – first wait for ACTIVE state and then try a port connection.
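A minimal polling sketch implementing the recommendations above (the server name, `<public-ip-jump>` and the login user are placeholders):
```sh
# wait until the server reaches the ACTIVE state
until [ "$(openstack server show my-server1 -c status -f value)" = "ACTIVE" ]; do
  sleep 30
done
# then probe the SSH port with a short timeout and a relaxed retry period
until timeout 5 bash -c 'cat < /dev/null > /dev/tcp/<public-ip-jump>/22' 2>/dev/null; do
  sleep 30
done
# finally connect, reusing the connection for subsequent sessions
ssh -o ControlMaster=auto -o ControlPersist=60s centos@<public-ip-jump>
```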
### How to check whether you are blocked
Run the following bash function from the machine where you believe you got blocked (A), and also from another one located in a different IP network segment (B, for instance a VM in another cloud):
```sh
# Test cloud accessibility from a Linux or Windows WSL 2 environment
# The BASH function requires the following tools to be installed:
#  ip, host, tracepath, traceroute, ping, curl, ncat, timeout, bash
# Execution example: test_cloud_access 178.128.250.99 22
function test_cloud_access() {
  local bastion_vm_public_ip="$1"
  local bastion_vm_public_port="${2:-22}"
  local cloud_identity_host=${3:-identity.cloud.muni.cz}
  local timeout=60
  set -x
  cmds=("ip a" "ip -4 r l" "ip -6 r l")
  for i_cmd in "${cmds[@]}"; do
    ${i_cmd}; echo "ecode:$?";
  done
  for i_cmd in host tracepath traceroute ping; do
    timeout --signal=2 ${timeout} ${i_cmd} "${cloud_identity_host}"
    echo "ecode:$?"
  done
  timeout --signal=2 ${timeout} curl -v "https://${cloud_identity_host}"
  echo "ecode:$?"
  timeout --signal=2 ${timeout} ncat -z "${bastion_vm_public_ip}" "${bastion_vm_public_port}"
  echo "ecode:$?"
  set +x
}
```
### How to report a network issue and get unblocked
If you suspect that your virtual machines are blocked, contact support by sending an email to cloud@metacentrum.cz. To make things easier and resolve the issue faster, include the outputs of the bash function `test_cloud_access()` above, run from both machines (A and B).
---
title: "Flavors"
date: 2022-05-20T09:05:00+02:00
draft: false
disableToc: true
GeekdocHidden: true
---
On this page you can find the list of flavors offered in MetaCentrum Cloud.
*Data in this table may not be up-to-date.*
{{< csv-table header="true">}}
Flavor name,CPU,RAM (in GB),HPC,SSD,Disk throughput (in MB per second),IOPS,Net average throughput (in MB per second),GPU
elixir.hda1,30,724,Yes,No,Unlimited,Unlimited,Unlimited,No
elixir.hda1-10core-240ram,10,240,Yes,No,Unlimited,Unlimited,Unlimited,No
hpc.16core-128ram,16,128,Yes,No,524.288,2000,2000.0,No
hpc.16core-32ram,16,32,Yes,No,524.288,2000,2000.0,No
hpc.16core-64ram-ssd-ephem,16,64,Yes,Yes,Unlimited,Unlimited,1250.0,No
hpc.18core-48ram,18,48,Yes,No,524.288,2000,2000.0,No
hpc.19core-176ram-nvidia-1080-glados,19,176,Yes,Yes,Unlimited,Unlimited,Unlimited,Yes
hpc.19core-176ram-nvidia-2080-glados,19,176,Yes,Yes,Unlimited,Unlimited,Unlimited,Yes
hpc.24core-256ram-ssd-ephem,24,256,Yes,Yes,Unlimited,Unlimited,1250.0,No
hpc.24core-96ram-ssd-ephem,24,96,Yes,Yes,Unlimited,Unlimited,1250.0,No
hpc.30core-128ram-ssd-ephem-500,30,128,Yes,Yes,Unlimited,Unlimited,1250.0,No
hpc.30core-256ram,30,256,Yes,No,Unlimited,Unlimited,Unlimited,No
hpc.30core-64ram,30,64,Yes,No,Unlimited,Unlimited,Unlimited,No
hpc.32core-256ram-nvidia-t4-single-gpu,32,240,Yes,No,Unlimited,Unlimited,Unlimited,Yes
hpc.38core-372ram-nvidia-1080-glados,38,352,Yes,Yes,Unlimited,Unlimited,Unlimited,Yes
hpc.38core-372ram-nvidia-2080-glados,38,352,Yes,Yes,Unlimited,Unlimited,Unlimited,Yes
hpc.38core-372ram-ssd-ephem,38,372,Yes,Yes,Unlimited,Unlimited,1250.0,No
hpc.40core-372ram-nvidia-1080-glados,40,352,Yes,Yes,Unlimited,Unlimited,Unlimited,Yes
hpc.40core-372ram-nvidia-2080-glados,40,352,Yes,Yes,Unlimited,Unlimited,Unlimited,Yes
hpc.40core-372ram-nvidia-titan-glados,40,352,Yes,Yes,Unlimited,Unlimited,Unlimited,Yes
hpc.4core-16ram-ssd-ephem,4,16,Yes,Yes,Unlimited,Unlimited,1250.0,No
hpc.4core-16ram-ssd-ephem-500,4,16,Yes,Yes,Unlimited,Unlimited,1250.0,No
hpc.64core-512ram-nvidia-t4,64,480,Yes,No,Unlimited,Unlimited,Unlimited,Yes
hpc.8core-16ram,8,16,Yes,No,524.288,2000,2000.0,No
hpc.8core-32ram-ssd-ephem,8,32,Yes,Yes,Unlimited,Unlimited,1250.0,No
hpc.8core-32ram-ssd-rcx-ephem,8,32,Yes,Yes,Unlimited,Unlimited,Unlimited,No
hpc.8core-64ram,8,64,Yes,No,524.288,2000,2000.0,No
hpc.8core-64ram-nvidia-1080-glados,8,64,Yes,Yes,Unlimited,Unlimited,Unlimited,Yes
hpc.hdh,32,480,Yes,No,Unlimited,Unlimited,Unlimited,No
hpc.hdh-8cpu-120ram,8,120,Yes,No,Unlimited,Unlimited,Unlimited,No
hpc.hdh-ephem,32,480,Yes,No,Unlimited,Unlimited,Unlimited,No
hpc.ics-gladosag-full,38,372,Yes,No,Unlimited,Unlimited,Unlimited,No
hpc.large,16,64,Yes,No,524.288,2000,2000.0,No
hpc.medium,8,32,Yes,No,524.288,2000,2000.0,No
hpc.nvidia-2080-hdg-16cpu-236ram-ephem,16,236,Yes,No,Unlimited,Unlimited,Unlimited,Yes
hpc.nvidia-2080-hdg-ephem,32,448,Yes,No,Unlimited,Unlimited,Unlimited,Yes
hpc.nvidia-2080-hdg-half-ephem,16,238,Yes,No,Unlimited,Unlimited,Unlimited,Yes
hpc.small,4,16,Yes,No,524.288,2000,2000.0,No
hpc.xlarge,24,96,Yes,No,524.288,2000,2000.0,No
hpc.xlarge-memory,24,256,Yes,No,Unlimited,Unlimited,Unlimited,No
standard.12core-24ram,12,24,No,No,262.144,2000,625.0,No
standard.16core-32ram,16,32,No,No,262.144,2000,625.0,No
standard.20core-128ram,20,128,No,No,262.144,2000,250.0,No
standard.20core-160ram,20,160,No,No,262.144,2000,1250.0,No
standard.20core-256ram,20,256,No,No,262.144,2000,1250.0,No
standard.2core-16ram,2,16,No,No,262.144,2000,250.0,No
standard.large,4,8,No,No,262.144,2000,250.0,No
standard.medium,2,4,No,No,262.144,2000,250.0,No
standard.memory,2,32,No,No,262.144,2000,250.0,No
standard.one-to-many,20,64,No,No,262.144,2000,250.0,No
standard.small,1,2,No,No,262.144,2000,250.0,No
standard.tiny,1,1,No,No,262.144,2000,250.0,No
standard.xlarge,4,16,No,No,262.144,2000,250.0,No
standard.xlarge-cpu,8,16,No,No,262.144,2000,250.0,No
standard.xxlarge,8,32,No,No,262.144,2000,250.0,No
standard.xxxlarge,8,64,No,No,262.144,2000,250.0,No
{{</csv-table>}}
---
title: "GPUs"
date: 2022-05-20T09:05:00+02:00
draft: false
disableToc: true
GeekdocHidden: true
---
On this page you can find a static list of GPUs offered in MetaCentrum Cloud.
{{< csv-table header="true">}}
GPU, Total nodes, GPUs per node
NVIDIA Tesla T4, 16, 2
NVIDIA A40 (**), 2, 4
NVIDIA TITAN V, 1, 1
NVIDIA GeForce GTX 1080 Ti (*), 8, 2
NVIDIA GeForce GTX 2080 (*), 9, 2
NVIDIA GeForce GTX 2080 Ti (*), 14, 2
{{</csv-table>}}
Notes:
- (*) experimental use in an academic environment.
- (**) There is currently an operating system limitation for VM servers attaching this GPU device; only Debian 10 and CentOS 7 are supported.
Current GPU usage can be viewed on [GPU overview dashboard](https://grafana1.cloud.muni.cz/d/J66duZjnk/openstack-gpu-resource-overview) (valid e-infra / MUNI identity needed).
---
title: "Image Rotation News"
date: 2022-01-23T20:31:35+02:00
draft: false
disableToc: true
---