Verified Commit 1f0bc08a authored by Dominik Vašek

remove old lb info, add gpu stats to flavors

parent a533c411
1 merge request: !107 remove old lb info, add gpu stats to flavors
@@ -182,67 +182,6 @@ For instances that require high throughput and IOPS, it is possible to utilize
Affinity policy is a tool users can use to decide whether nodes of a cluster should be deployed on the same physical machine or spread across different physical machines. Co-locating nodes can be beneficial if you need fast communication between them, while spreading them helps with load balancing, high availability, etc. For more info please refer to [https://docs.openstack.org/senlin/train/scenarios/affinity.html](https://docs.openstack.org/senlin/train/scenarios/affinity.html).
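As an illustration, the sketch below creates a Nova server group with the `affinity` policy and boots an instance into it. The names `my-group` and `my-instance` and the placeholders in angle brackets are only examples, not values defined by this cloud.
```
# create a server group whose members should land on the same physical machine
openstack server group create --policy affinity my-group

# boot an instance into the group via a scheduler hint
# (replace the flavor, image and network with values from your project)
openstack server create --flavor <flavor_name> --image <image_ID> \
  --network <internal_network_ID> \
  --hint group=<server_group_ID> my-instance
```
Using `--policy anti-affinity` instead spreads the instances across different physical machines.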
## LBaaS - OpenStack Octavia
A load balancer is a tool for distributing a set of tasks over a particular set of resources. Its main goal is to route requests towards redundant backend services in a high-availability scenario.
The following example shows how a load balancer for a basic HTTP service is deployed via the CLI.
**Requirements**:
- 2 instances connected to the same internal subnet and configured with an HTTP application on TCP port 80

First, create the load balancer on the shared external subnet:
```
openstack loadbalancer create --name my_lb --vip-subnet-id <external_subnet_ID>
```
where **<external_subnet_ID>** is the ID of an external shared subnet created by cloud admins that is reachable from the Internet.
You can check the newly created Load Balancer by running the following command:
```
openstack loadbalancer show my_lb
```
Now you must create a listener on port 80 to accept incoming traffic, using the following command:
```
openstack loadbalancer listener create --name listener_http --protocol HTTP --protocol-port 80 my_lb
```
Now you must add a pool to the created listener to set up the backend configuration for the load balancer. You can do it with the following command:
```
openstack loadbalancer pool create --name pool_http --lb-algorithm ROUND_ROBIN --listener listener_http --protocol HTTP
```
Here you created a pool that uses the round-robin algorithm for load balancing.
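ROUND_ROBIN is just one of the algorithms Octavia supports (LEAST_CONNECTIONS and SOURCE_IP are other common choices). As a quick, optional sanity check you can inspect the pool you just created:
```
# verify the pool exists and shows lb_algorithm ROUND_ROBIN
openstack loadbalancer pool show pool_http
```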
Now add both instances as members of the load balancer:
```
openstack loadbalancer member create --subnet-id <internal_subnet_ID> --address 192.168.50.15 --protocol-port 80 pool_http
openstack loadbalancer member create --subnet-id <internal_subnet_ID> --address 192.168.50.16 --protocol-port 80 pool_http
```
where **<internal_subnet_ID>** is the ID of the internal subnet used by your instances and **--address** specifies the address of the particular instance.
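To check that both members were registered, a simple verification sketch (assuming the pool name `pool_http` from the steps above):
```
# list the members of the pool together with their status
openstack loadbalancer member list pool_http
```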
For more info, please refer to [https://docs.openstack.org/octavia/train/user/guides/basic-cookbook.html#basic-lb-with-hm-and-fip](https://docs.openstack.org/octavia/train/user/guides/basic-cookbook.html#basic-lb-with-hm-and-fip).
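The referenced cookbook also adds a health monitor, so that members failing the probes are automatically taken out of rotation. A minimal sketch, assuming the `pool_http` pool created above; the delay, timeout and retry values are only illustrative:
```
# periodically probe members over HTTP and evict unhealthy ones
openstack loadbalancer healthmonitor create --name hm_http \
  --delay 5 --timeout 4 --max-retries 3 \
  --type HTTP --url-path / pool_http
```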
{{<hint info>}}
**NOTICE:**
It can happen that the load balancer itself is running but connections still fail because the traffic is not allowed by security groups. To prevent this, don't forget to apply a neutron security group to the amphorae created on the LB network so that traffic can reach the configured load balancer (see the example below). See [the load balancer deployment walkthrough](https://docs.openstack.org/octavia/train/contributor/guides/dev-quick-start.html?highlight=security%20group#production-deployment-walkthrough) for more details.
{{</hint>}}
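In practice this usually means allowing HTTP traffic towards the backend instances as well. A hedged sketch, assuming a security group named `my_sec_group` (a placeholder) is assigned to both member instances:
```
# allow incoming HTTP traffic on port 80 to the backend instances
openstack security group rule create --protocol tcp --dst-port 80 \
  --remote-ip 0.0.0.0/0 my_sec_group
```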
LBaaS (Load Balancer as a Service) provides the user with a load balancing service that can be fully managed via the OpenStack API (some basic tasks are also supported by the GUI). Core benefits:
* creation and management of load balancer resources can be easily automated via the API or existing tools like Ansible or Terraform
* applications can be easily scaled by starting up more OpenStack instances and registering them into the load balancer
* public IPv4 addresses are saved - you can deploy one load balancer with one public IP and serve multiple services on multiple pools of instances, distinguished by TCP/UDP port or L7 policies (see the sketch below)
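A hedged sketch of that public-IP scenario: instead of creating the load balancer directly on the external subnet as in the example above, you can keep the VIP on an internal subnet and attach a floating IP to the VIP port. The network name and IDs below are placeholders:
```
# find the port holding the load balancer's VIP
openstack loadbalancer show my_lb -c vip_port_id -f value

# allocate a floating IP from the external network and attach it to that port
openstack floating ip create <external_network>
openstack floating ip set --port <vip_port_ID> <floating_IP>
```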
**This feature is provided as is, and its configuration is entirely the responsibility of the user.**
Official documentation for the LBaaS (Octavia) service: [https://docs.openstack.org/octavia/latest/user/index.html](https://docs.openstack.org/octavia/latest/user/index.html)
## Cloud orchestration tools
### Terraform
---
title: "Flavors"
date: 2022-05-20T09:05:00+02:00
draft: false
disableToc: true
GeekdocHidden: true
---
On this page you can find the list of offered flavors and GPUs in Metacentrum Cloud.
*Data in this table may not be up-to-date.*
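If you prefer the CLI, the currently offered flavors can also be listed directly from OpenStack; a short sketch (the flavor name is taken from the table below):
```
# list all flavors visible to your project
openstack flavor list

# show details (vCPUs, RAM, disk) of a single flavor
openstack flavor show standard.xxlarge
```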
{{< csv-table header="true">}}
@@ -71,4 +70,20 @@ standard.xxlarge,8,32,No,No,262.144,2000,250.0,No
standard.xxxlarge,8,64,No,No,262.144,2000,250.0,No
{{</csv-table>}}
## GPUs
{{< csv-table header="true">}}
GPU, Total nodes, GPUs per node
NVIDIA Tesla T4, 16, 2
NVIDIA A40, 2, 4
NVIDIA TITAN V, 1, 1
NVIDIA GeForce GTX 1080 Ti*, 8, 2
NVIDIA GeForce RTX 2080*, 9, 2
NVIDIA GeForce RTX 2080 Ti*, 14, 2
{{</csv-table>}}
\* experimental use in an academic environment.